Test Report: Docker_Linux 16124

eeac85fe476c751393a203217177d94606b81c9d:2023-03-21:28422

Failed tests (2/313)

Order  Failed test                             Duration
205    TestMultiNode/serial/DeployApp2Nodes    5.86s
206    TestMultiNode/serial/PingHostFrom2Pods  3.08s
TestMultiNode/serial/DeployApp2Nodes (5.86s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-860915 -- rollout status deployment/busybox: (1.897045223s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
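The assertion at multinode_test.go:496 counts whitespace-separated IPs returned by the jsonpath query and expects one per busybox replica (two nodes, two pods). A minimal sketch of that check, fed with the single IP captured in this failing run (a healthy run would yield something like `10.244.0.3 10.244.1.2`, with the second IP coming from the pod scheduled on the second node):

```shell
# Pod IPs as returned by:
#   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
# "10.244.0.3" is the value from this report's failing run.
ips="10.244.0.3"

# The test's expectation is 2; only one pod got an IP here,
# which suggests the second replica never became ready on node 2.
count=$(echo "$ips" | wc -w)
printf 'pod IP count: %d\n' "$count"
```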
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-62ggt -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.io: exit status 1 (165.719183ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-kpfz8 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-62ggt -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.default: exit status 1 (169.892281ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-kpfz8 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-62ggt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (175.337798ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-kpfz8 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
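All three failures on busybox-6b86dd6d48-kpfz8 (the pod on the second node) share one signature: the pod reaches the cluster DNS server (10.96.0.10 answers as "Server"), but every name fails with "can't resolve", while the sibling pod busybox-6b86dd6d48-62ggt resolves fine. A small hypothetical helper that classifies busybox nslookup output along that line, fed with the stderr captured above; this is a sketch of the distinction, not part of the test suite:

```shell
# classify: hypothetical helper, not part of minikube's tests.
# Distinguishes "DNS server unreachable" from "server reached but
# name not resolved" (the pattern seen in this report) in busybox
# nslookup output.
classify() {
  out="$1"
  if echo "$out" | grep -q "can't resolve"; then
    echo "server-reached-name-unresolved"
  elif echo "$out" | grep -q "Server:"; then
    echo "ok"
  else
    echo "server-unreachable"
  fi
}

classify "nslookup: can't resolve 'kubernetes.default'"
```

Under this reading, the second node's pod can route to the kube-dns ClusterIP but gets negative answers, which typically points at the DNS backend or inter-node pod networking rather than at the service VIP itself.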
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-860915
helpers_test.go:235: (dbg) docker inspect multinode-860915:

-- stdout --
	[
	    {
	        "Id": "cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc",
	        "Created": "2023-03-21T22:04:10.120040349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 154141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-21T22:04:10.448384332Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/hostname",
	        "HostsPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/hosts",
	        "LogPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc-json.log",
	        "Name": "/multinode-860915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-860915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-860915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83-init/diff:/var/lib/docker/overlay2/d640a49204b62cbdd456945d4d005345a58620b62ae9b33d65049d1c993396b8/diff:/var/lib/docker/overlay2/2f69ca1a3446908a3a75abc91f938fabe5666af6aeb8015b4624852cff4cddf4/diff:/var/lib/docker/overlay2/77826550c3b08610fd851464ed2b7833a274ce77dd51835381cdb9c21b556c7d/diff:/var/lib/docker/overlay2/e15956ab42b8efa1672992b84fed94e79fbbeae307eec145f36b8093817fbc9d/diff:/var/lib/docker/overlay2/f89b982ab58387313cef069aedcdc102b85e2564f1414edf0b099b6d06e8d760/diff:/var/lib/docker/overlay2/7327a750743ed9373f2f5681004c7795a4b64f5704efcb57ee5e29ab3757844d/diff:/var/lib/docker/overlay2/01a1b9b43163306f4f6240c5ba892c673598ce3b971a08a0a97fe0f5239214db/diff:/var/lib/docker/overlay2/bf4d055c40227bfa14f549a18ded4142ad9306fd9458230679aaa4900118281b/diff:/var/lib/docker/overlay2/74353c9e2d5dc25abe40c7777e0c09395af27bb7a8a3e18498bf7904846b7f11/diff:/var/lib/docker/overlay2/fe3c07
69566c45228c4f0c59f9f20d0a974e3493d4468ee806436c1fbc085a8f/diff:/var/lib/docker/overlay2/05557f82a049377810342eaad5167446fffe852231ffc75334cb98105537c915/diff:/var/lib/docker/overlay2/0d8fe544a42c85fa45a0902d36c933192fa8315b60a92c196f5a416ea55bebc2/diff:/var/lib/docker/overlay2/33a688c843fa0b7360dc919ee277bdcda578b2c83406f9fc5cf8859bb20439fe/diff:/var/lib/docker/overlay2/627d3c89f753c6719656c148f1bb6c9bb4a106753297be3a3ba7efe924e398e3/diff:/var/lib/docker/overlay2/06bb92ecbc5497dbcd6cb4f5e86dbad24fe99e250f8004cec607f90003af9137/diff:/var/lib/docker/overlay2/f42dc72746bc9fc6065a63dee52653a285840ff1dc5ee7aad14b1e7cafb0475a/diff:/var/lib/docker/overlay2/5c4e7423869c8634195ccedb2ad5c6fad22fbda5790e5761deb4f224967328eb/diff:/var/lib/docker/overlay2/8691a3dd8c2958c2e4bace0a06058278ebdb723d8409115a9cdbe0f792fa44c5/diff:/var/lib/docker/overlay2/85ac776fea5b19e198189e15e3f73cc69377ed4c78d93b05d1a9c2a8deaa4747/diff:/var/lib/docker/overlay2/3b812c4414ef925986f30a5a9b882e40f03ee837f5d194ac8aa3057989b7105d/diff:/var/lib/d
ocker/overlay2/5f251bd97e750c54f037dd59f3f4f8d219015fb750d9fdc760c223b79dcafa21/diff:/var/lib/docker/overlay2/5666d5e81373d3ebaec11d38c62fe4981ec0220630d33665e71d5ea1d81809ca/diff:/var/lib/docker/overlay2/5792555264d0709dc72fe4cb189ff0d80530c20670d4d9f056cf6c03792d9b30/diff:/var/lib/docker/overlay2/9c49a7384955edf6efb79c488e86f1d4b40cc81a57cf0d014243e0863d095054/diff:/var/lib/docker/overlay2/6e36cdb4177b2c7d49fe1ee1b4dc25a61c12fee623506ab19552e2ae8742235d/diff:/var/lib/docker/overlay2/9e066b30f26fd8c76ac87635130e55dfcc7c8865a5039b98735ee1c04266b065/diff:/var/lib/docker/overlay2/3612d1a6aa3e0e19293da12b3afe1e28522248ab181db4001942f6bf17eee0af/diff:/var/lib/docker/overlay2/70c7cccc16141dac653158cec14640abd099db393b3c90bb1af94507efed3f2b/diff:/var/lib/docker/overlay2/36f9d3b3eb7d184762018b2ccf3682c78b2997003c52951acdf3d57d5f668513/diff:/var/lib/docker/overlay2/78a72850c17cb0d5bc811600906e759868b4f83bd5e24b09c8f8410c52bc05bc/diff:/var/lib/docker/overlay2/37f6e96371561f325d1feb1c6af3e0f272fddd9f52bf01b650fe3281d6a
900ef/diff:/var/lib/docker/overlay2/c1c9ec5dbda3b0a53eb1c677bb46ec48a6851a8b8e3cfc5dbd46221b31aa3f1e/diff:/var/lib/docker/overlay2/e1648eef5200cf93385333f2ec689b44c2bcda83e09ab0386b268b42beb65592/diff:/var/lib/docker/overlay2/db69e6572eda782bd31d2ce0e4712d18c6aea21e1c07fa3db7316703d4134d66/diff:/var/lib/docker/overlay2/886164b1ee98c6ea98e2b536acfb100958b631d7f0a728ef6d5df5ad3a6200d8/diff:/var/lib/docker/overlay2/1181f6b9ebac02faf5e0f7cc46878bd68f6445538dd92f2d543b5044f6d18086/diff:/var/lib/docker/overlay2/47c234d852a21d6a8d75aa0538ac1629cef07994d3203f204c35c6331320983e/diff:/var/lib/docker/overlay2/ab6bf2566e0df27f85567c3169d4a2484be237abe237f02da1d144eae02eb2a6/diff:/var/lib/docker/overlay2/fffc78b1cef142380fa7093faac387c6bc601298db1fbff1b96978271b9aedb1/diff:/var/lib/docker/overlay2/00e44020633971002e0909c887ed95319d30728c1006a38c1aa9478ca2e20349/diff:/var/lib/docker/overlay2/2428fd5b50e8eec63b30fd533c95c5a69a1f50c93cad081ab7d4919549d7dfca/diff:/var/lib/docker/overlay2/82d0bc3e11a8925c2176e9c8536f8b1a628915
149712d6c580274cce9c037a7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-860915",
	                "Source": "/var/lib/docker/volumes/multinode-860915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-860915",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-860915",
	                "name.minikube.sigs.k8s.io": "multinode-860915",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b5b5d79b7895400795904c26cab7b38fbbe49979122c2a962e47f82bb4fb74a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4b5b5d79b789",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-860915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cea2236b9832",
	                        "multinode-860915"
	                    ],
	                    "NetworkID": "4324ac65e556d4a4c34c0ca93ae29a3fc50c655ceca50aaea9b992e52a60d35d",
	                    "EndpointID": "e239b70323757b86270de8c8fad89b6de84738686066dbc427155b96dc44a251",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
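The inspect output above maps each container port to a loopback host port (e.g. the API server on 8443/tcp is published at 127.0.0.1:32849). A sketch of pulling that mapping out of the JSON with sed, using a fragment of the output above rather than assuming jq is available:

```shell
# Fragment of the "Ports" mapping from the docker inspect output above.
json='"8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32849"}]'

# Extract the published host port for the API server.
port=$(echo "$json" | sed -n 's/.*"HostPort": "\([0-9]*\)".*/\1/p')
printf 'apiserver host port: %s\n' "$port"
```

In a live session the same value would normally come from `docker inspect` itself (or `docker port multinode-860915 8443`), but parsing the captured report text works for post-mortems.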
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-860915 -n multinode-860915
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 logs -n 25: (1.05148916s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-666791                                  | second-666791        | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| delete  | -p second-666791                                  | second-666791        | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| delete  | -p first-664009                                   | first-664009         | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| start   | -p mount-start-1-353660                           | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-1-353660 ssh -- ls                    | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-353660                           | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| start   | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	| delete  | -p mount-start-1-353660                           | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	| start   | -p multinode-860915                               | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- apply -f                   | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- rollout                    | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- get pods -o                | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- get pods -o                | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 22:04:03
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 22:04:03.744505  153142 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:04:03.744728  153142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:04:03.744738  153142 out.go:309] Setting ErrFile to fd 2...
	I0321 22:04:03.744742  153142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:04:03.744841  153142 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 22:04:03.745387  153142 out.go:303] Setting JSON to false
	I0321 22:04:03.746750  153142 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2796,"bootTime":1679433448,"procs":891,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 22:04:03.746811  153142 start.go:135] virtualization: kvm guest
	I0321 22:04:03.749615  153142 out.go:177] * [multinode-860915] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 22:04:03.751151  153142 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 22:04:03.751166  153142 notify.go:220] Checking for updates...
	I0321 22:04:03.752884  153142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 22:04:03.754552  153142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:03.756081  153142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 22:04:03.757403  153142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 22:04:03.758749  153142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 22:04:03.760175  153142 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 22:04:03.827539  153142 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 22:04:03.827662  153142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 22:04:03.945489  153142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-21 22:04:03.936970375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 22:04:03.945585  153142 docker.go:294] overlay module found
	I0321 22:04:03.947515  153142 out.go:177] * Using the docker driver based on user configuration
	I0321 22:04:03.949298  153142 start.go:295] selected driver: docker
	I0321 22:04:03.949309  153142 start.go:856] validating driver "docker" against <nil>
	I0321 22:04:03.949318  153142 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 22:04:03.950009  153142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 22:04:04.065279  153142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-21 22:04:04.057187513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 22:04:04.065407  153142 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0321 22:04:04.065683  153142 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0321 22:04:04.067535  153142 out.go:177] * Using Docker driver with root privileges
	I0321 22:04:04.068931  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:04.068947  153142 cni.go:136] 0 nodes found, recommending kindnet
	I0321 22:04:04.068954  153142 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0321 22:04:04.068965  153142 start_flags.go:319] config:
	{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:04:04.070684  153142 out.go:177] * Starting control plane node multinode-860915 in cluster multinode-860915
	I0321 22:04:04.072065  153142 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 22:04:04.073462  153142 out.go:177] * Pulling base image ...
	I0321 22:04:04.074806  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:04.074838  153142 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0321 22:04:04.074847  153142 cache.go:57] Caching tarball of preloaded images
	I0321 22:04:04.074899  153142 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 22:04:04.074915  153142 preload.go:174] Found /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0321 22:04:04.074925  153142 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0321 22:04:04.075216  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:04.075236  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json: {Name:mk88dbb8da7413ed3f2bbb1b1a154d821228fcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:04.138767  153142 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0321 22:04:04.138792  153142 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0321 22:04:04.138810  153142 cache.go:193] Successfully downloaded all kic artifacts
	I0321 22:04:04.138846  153142 start.go:364] acquiring machines lock for multinode-860915: {Name:mk71a5a6463f94b190d019928f9ca0fdae04ca58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:04:04.138941  153142 start.go:368] acquired machines lock for "multinode-860915" in 76.848µs
	I0321 22:04:04.138964  153142 start.go:93] Provisioning new machine with config: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0321 22:04:04.139058  153142 start.go:125] createHost starting for "" (driver="docker")
	I0321 22:04:04.141298  153142 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0321 22:04:04.141509  153142 start.go:159] libmachine.API.Create for "multinode-860915" (driver="docker")
	I0321 22:04:04.141536  153142 client.go:168] LocalClient.Create starting
	I0321 22:04:04.141630  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem
	I0321 22:04:04.141660  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:04.141676  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:04.141727  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem
	I0321 22:04:04.141746  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:04.141754  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:04.142063  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0321 22:04:04.204809  153142 cli_runner.go:211] docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0321 22:04:04.204884  153142 network_create.go:281] running [docker network inspect multinode-860915] to gather additional debugging logs...
	I0321 22:04:04.204905  153142 cli_runner.go:164] Run: docker network inspect multinode-860915
	W0321 22:04:04.265273  153142 cli_runner.go:211] docker network inspect multinode-860915 returned with exit code 1
	I0321 22:04:04.265308  153142 network_create.go:284] error running [docker network inspect multinode-860915]: docker network inspect multinode-860915: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-860915 not found
	I0321 22:04:04.265322  153142 network_create.go:286] output of [docker network inspect multinode-860915]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-860915 not found
	
	** /stderr **
	I0321 22:04:04.265370  153142 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:04.331927  153142 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-068451d2c467 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:66:9e:04:2f} reservation:<nil>}
	I0321 22:04:04.332388  153142 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001614640}
	I0321 22:04:04.332419  153142 network_create.go:123] attempt to create docker network multinode-860915 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0321 22:04:04.332458  153142 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-860915 multinode-860915
	I0321 22:04:04.431448  153142 network_create.go:107] docker network multinode-860915 192.168.58.0/24 created
	I0321 22:04:04.431475  153142 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-860915" container
	I0321 22:04:04.431523  153142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0321 22:04:04.494901  153142 cli_runner.go:164] Run: docker volume create multinode-860915 --label name.minikube.sigs.k8s.io=multinode-860915 --label created_by.minikube.sigs.k8s.io=true
	I0321 22:04:04.559292  153142 oci.go:103] Successfully created a docker volume multinode-860915
	I0321 22:04:04.559366  153142 cli_runner.go:164] Run: docker run --rm --name multinode-860915-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915 --entrypoint /usr/bin/test -v multinode-860915:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0321 22:04:05.132509  153142 oci.go:107] Successfully prepared a docker volume multinode-860915
	I0321 22:04:05.132565  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:05.132591  153142 kic.go:190] Starting extracting preloaded images to volume ...
	I0321 22:04:05.132656  153142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0321 22:04:09.938515  153142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (4.805787078s)
	I0321 22:04:09.938551  153142 kic.go:199] duration metric: took 4.805957 seconds to extract preloaded images to volume
	W0321 22:04:09.938706  153142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0321 22:04:09.938811  153142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0321 22:04:10.056157  153142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-860915 --name multinode-860915 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-860915 --network multinode-860915 --ip 192.168.58.2 --volume multinode-860915:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0321 22:04:10.456490  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Running}}
	I0321 22:04:10.525789  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:10.592592  153142 cli_runner.go:164] Run: docker exec multinode-860915 stat /var/lib/dpkg/alternatives/iptables
	I0321 22:04:10.709843  153142 oci.go:144] the created container "multinode-860915" has a running status.
	I0321 22:04:10.709884  153142 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa...
	I0321 22:04:10.753358  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0321 22:04:10.753416  153142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0321 22:04:10.876805  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:10.947350  153142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0321 22:04:10.947375  153142 kic_runner.go:114] Args: [docker exec --privileged multinode-860915 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0321 22:04:11.066972  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:11.136867  153142 machine.go:88] provisioning docker machine ...
	I0321 22:04:11.136929  153142 ubuntu.go:169] provisioning hostname "multinode-860915"
	I0321 22:04:11.136983  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.199873  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.200308  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.200325  153142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860915 && echo "multinode-860915" | sudo tee /etc/hostname
	I0321 22:04:11.322112  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860915
	
	I0321 22:04:11.322184  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.387413  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.387822  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.387841  153142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860915/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0321 22:04:11.501517  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0321 22:04:11.501546  153142 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16124-3841/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-3841/.minikube}
	I0321 22:04:11.501578  153142 ubuntu.go:177] setting up certificates
	I0321 22:04:11.501587  153142 provision.go:83] configureAuth start
	I0321 22:04:11.501636  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:11.565024  153142 provision.go:138] copyHostCerts
	I0321 22:04:11.565060  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:04:11.565090  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem, removing ...
	I0321 22:04:11.565098  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:04:11.565162  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem (1082 bytes)
	I0321 22:04:11.565234  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:04:11.565252  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem, removing ...
	I0321 22:04:11.565256  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:04:11.565281  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem (1123 bytes)
	I0321 22:04:11.565321  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:04:11.565336  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem, removing ...
	I0321 22:04:11.565342  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:04:11.565364  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem (1675 bytes)
	I0321 22:04:11.565408  153142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem org=jenkins.multinode-860915 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-860915]
	I0321 22:04:11.666578  153142 provision.go:172] copyRemoteCerts
	I0321 22:04:11.666641  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0321 22:04:11.666673  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.733368  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:11.816832  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0321 22:04:11.816886  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0321 22:04:11.833484  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0321 22:04:11.833540  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0321 22:04:11.849328  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0321 22:04:11.849387  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0321 22:04:11.865155  153142 provision.go:86] duration metric: configureAuth took 363.554942ms
	I0321 22:04:11.865181  153142 ubuntu.go:193] setting minikube options for container-runtime
	I0321 22:04:11.865342  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:11.865387  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.927979  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.928380  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.928394  153142 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0321 22:04:12.041503  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0321 22:04:12.041535  153142 ubuntu.go:71] root file system type: overlay
	I0321 22:04:12.041679  153142 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0321 22:04:12.041751  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.106236  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:12.106635  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:12.106695  153142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0321 22:04:12.225943  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0321 22:04:12.226039  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.288216  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:12.288677  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:12.288698  153142 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0321 22:04:12.901398  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-21 22:04:12.219688201 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0321 22:04:12.901432  153142 machine.go:91] provisioned docker machine in 1.764541806s
	I0321 22:04:12.901442  153142 client.go:171] LocalClient.Create took 8.759901007s
	I0321 22:04:12.901465  153142 start.go:167] duration metric: libmachine.API.Create for "multinode-860915" took 8.75995422s
	I0321 22:04:12.901477  153142 start.go:300] post-start starting for "multinode-860915" (driver="docker")
	I0321 22:04:12.901484  153142 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0321 22:04:12.901551  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0321 22:04:12.901599  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.964498  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.053201  153142 ssh_runner.go:195] Run: cat /etc/os-release
	I0321 22:04:13.055725  153142 command_runner.go:130] > NAME="Ubuntu"
	I0321 22:04:13.055743  153142 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0321 22:04:13.055747  153142 command_runner.go:130] > ID=ubuntu
	I0321 22:04:13.055752  153142 command_runner.go:130] > ID_LIKE=debian
	I0321 22:04:13.055756  153142 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0321 22:04:13.055760  153142 command_runner.go:130] > VERSION_ID="20.04"
	I0321 22:04:13.055767  153142 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0321 22:04:13.055774  153142 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0321 22:04:13.055782  153142 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0321 22:04:13.055798  153142 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0321 22:04:13.055811  153142 command_runner.go:130] > VERSION_CODENAME=focal
	I0321 22:04:13.055817  153142 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0321 22:04:13.055885  153142 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0321 22:04:13.055898  153142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0321 22:04:13.055906  153142 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0321 22:04:13.055912  153142 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0321 22:04:13.055920  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/addons for local assets ...
	I0321 22:04:13.055960  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/files for local assets ...
	I0321 22:04:13.056024  153142 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> 105322.pem in /etc/ssl/certs
	I0321 22:04:13.056034  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /etc/ssl/certs/105322.pem
	I0321 22:04:13.056109  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0321 22:04:13.062191  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:04:13.078761  153142 start.go:303] post-start completed in 177.272012ms
	I0321 22:04:13.079110  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:13.142344  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:13.142577  153142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:04:13.142614  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.206109  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.285871  153142 command_runner.go:130] > 17%!
	(MISSING)I0321 22:04:13.286068  153142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0321 22:04:13.289661  153142 command_runner.go:130] > 244G
	I0321 22:04:13.289689  153142 start.go:128] duration metric: createHost completed in 9.150623099s
	I0321 22:04:13.289700  153142 start.go:83] releasing machines lock for "multinode-860915", held for 9.150747768s
	I0321 22:04:13.289768  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:13.350474  153142 ssh_runner.go:195] Run: cat /version.json
	I0321 22:04:13.350529  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.350487  153142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0321 22:04:13.350633  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.420136  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.421481  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.535295  153142 command_runner.go:130] > {"iso_version": "v1.29.0-1678210391-15973", "kicbase_version": "v0.0.37-1679075007-16079", "minikube_version": "v1.29.0", "commit": "e88c2b31272b40b6ab7f12032e3d1be586055049"}
	I0321 22:04:13.535383  153142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0321 22:04:13.535456  153142 ssh_runner.go:195] Run: systemctl --version
	I0321 22:04:13.538764  153142 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.20)
	I0321 22:04:13.538796  153142 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0321 22:04:13.538929  153142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0321 22:04:13.542668  153142 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0321 22:04:13.542694  153142 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0321 22:04:13.542705  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1322525     Links: 1
	I0321 22:04:13.542716  153142 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:04:13.542732  153142 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:04:13.542745  153142 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:04:13.542754  153142 command_runner.go:130] > Change: 2023-03-21 21:49:53.137271995 +0000
	I0321 22:04:13.542762  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:13.542818  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0321 22:04:13.561901  153142 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0321 22:04:13.561956  153142 ssh_runner.go:195] Run: which cri-dockerd
	I0321 22:04:13.564471  153142 command_runner.go:130] > /usr/bin/cri-dockerd
	I0321 22:04:13.564630  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0321 22:04:13.571035  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0321 22:04:13.582842  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0321 22:04:13.597154  153142 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0321 22:04:13.597181  153142 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0321 22:04:13.597198  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:04:13.597231  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:04:13.597338  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:04:13.608700  153142 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0321 22:04:13.608778  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0321 22:04:13.616003  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0321 22:04:13.623081  153142 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0321 22:04:13.623121  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0321 22:04:13.630219  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:04:13.637084  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0321 22:04:13.643854  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:04:13.650879  153142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0321 22:04:13.657544  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0321 22:04:13.666967  153142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0321 22:04:13.672922  153142 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0321 22:04:13.672976  153142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0321 22:04:13.678867  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:13.752695  153142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:04:13.837727  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:04:13.837776  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:04:13.837828  153142 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0321 22:04:13.846591  153142 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0321 22:04:13.846616  153142 command_runner.go:130] > [Unit]
	I0321 22:04:13.846626  153142 command_runner.go:130] > Description=Docker Application Container Engine
	I0321 22:04:13.846635  153142 command_runner.go:130] > Documentation=https://docs.docker.com
	I0321 22:04:13.846640  153142 command_runner.go:130] > BindsTo=containerd.service
	I0321 22:04:13.846646  153142 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0321 22:04:13.846650  153142 command_runner.go:130] > Wants=network-online.target
	I0321 22:04:13.846655  153142 command_runner.go:130] > Requires=docker.socket
	I0321 22:04:13.846659  153142 command_runner.go:130] > StartLimitBurst=3
	I0321 22:04:13.846663  153142 command_runner.go:130] > StartLimitIntervalSec=60
	I0321 22:04:13.846667  153142 command_runner.go:130] > [Service]
	I0321 22:04:13.846672  153142 command_runner.go:130] > Type=notify
	I0321 22:04:13.846680  153142 command_runner.go:130] > Restart=on-failure
	I0321 22:04:13.846697  153142 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0321 22:04:13.846712  153142 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0321 22:04:13.846726  153142 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0321 22:04:13.846740  153142 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0321 22:04:13.846755  153142 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0321 22:04:13.846765  153142 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0321 22:04:13.846775  153142 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0321 22:04:13.846789  153142 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0321 22:04:13.846800  153142 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0321 22:04:13.846809  153142 command_runner.go:130] > ExecStart=
	I0321 22:04:13.846832  153142 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0321 22:04:13.846846  153142 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0321 22:04:13.846859  153142 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0321 22:04:13.846886  153142 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0321 22:04:13.846893  153142 command_runner.go:130] > LimitNOFILE=infinity
	I0321 22:04:13.846903  153142 command_runner.go:130] > LimitNPROC=infinity
	I0321 22:04:13.846910  153142 command_runner.go:130] > LimitCORE=infinity
	I0321 22:04:13.846919  153142 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0321 22:04:13.846932  153142 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0321 22:04:13.846942  153142 command_runner.go:130] > TasksMax=infinity
	I0321 22:04:13.846952  153142 command_runner.go:130] > TimeoutStartSec=0
	I0321 22:04:13.846963  153142 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0321 22:04:13.846973  153142 command_runner.go:130] > Delegate=yes
	I0321 22:04:13.846985  153142 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0321 22:04:13.846995  153142 command_runner.go:130] > KillMode=process
	I0321 22:04:13.847012  153142 command_runner.go:130] > [Install]
	I0321 22:04:13.847023  153142 command_runner.go:130] > WantedBy=multi-user.target
	I0321 22:04:13.847560  153142 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0321 22:04:13.847620  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0321 22:04:13.858227  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:04:13.869990  153142 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0321 22:04:13.870956  153142 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0321 22:04:13.976269  153142 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0321 22:04:14.053856  153142 docker.go:531] configuring docker to use "cgroupfs" as cgroup driver...
	I0321 22:04:14.053889  153142 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0321 22:04:14.078322  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:14.149044  153142 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0321 22:04:14.353281  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:04:14.433535  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0321 22:04:14.433604  153142 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0321 22:04:14.512865  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:04:14.593622  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:14.669419  153142 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0321 22:04:14.680008  153142 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0321 22:04:14.680079  153142 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0321 22:04:14.682928  153142 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0321 22:04:14.682947  153142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0321 22:04:14.682954  153142 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0321 22:04:14.682964  153142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0321 22:04:14.682975  153142 command_runner.go:130] > Access: 2023-03-21 22:04:14.671934836 +0000
	I0321 22:04:14.682983  153142 command_runner.go:130] > Modify: 2023-03-21 22:04:14.671934836 +0000
	I0321 22:04:14.682990  153142 command_runner.go:130] > Change: 2023-03-21 22:04:14.675935238 +0000
	I0321 22:04:14.683001  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:14.683023  153142 start.go:553] Will wait 60s for crictl version
	I0321 22:04:14.683056  153142 ssh_runner.go:195] Run: which crictl
	I0321 22:04:14.685524  153142 command_runner.go:130] > /usr/bin/crictl
	I0321 22:04:14.685575  153142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0321 22:04:14.762372  153142 command_runner.go:130] > Version:  0.1.0
	I0321 22:04:14.762392  153142 command_runner.go:130] > RuntimeName:  docker
	I0321 22:04:14.762398  153142 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0321 22:04:14.762406  153142 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0321 22:04:14.762423  153142 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0321 22:04:14.762467  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:04:14.784502  153142 command_runner.go:130] > 23.0.1
	I0321 22:04:14.784590  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:04:14.805223  153142 command_runner.go:130] > 23.0.1
	I0321 22:04:14.808863  153142 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0321 22:04:14.808942  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:14.871231  153142 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0321 22:04:14.874308  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:04:14.883138  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:14.883193  153142 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0321 22:04:14.899734  153142 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0321 22:04:14.899760  153142 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0321 22:04:14.899767  153142 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0321 22:04:14.899775  153142 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0321 22:04:14.899783  153142 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0321 22:04:14.899790  153142 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0321 22:04:14.899798  153142 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0321 22:04:14.899807  153142 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:14.900804  153142 docker.go:632] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0321 22:04:14.900830  153142 docker.go:562] Images already preloaded, skipping extraction
	I0321 22:04:14.900883  153142 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0321 22:04:14.919573  153142 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0321 22:04:14.919596  153142 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0321 22:04:14.919601  153142 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0321 22:04:14.919606  153142 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0321 22:04:14.919611  153142 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0321 22:04:14.919615  153142 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0321 22:04:14.919619  153142 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0321 22:04:14.919625  153142 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:14.920842  153142 docker.go:632] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0321 22:04:14.920865  153142 cache_images.go:84] Images are preloaded, skipping loading
	I0321 22:04:14.920911  153142 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0321 22:04:14.942590  153142 command_runner.go:130] > cgroupfs
	I0321 22:04:14.942631  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:14.942640  153142 cni.go:136] 1 nodes found, recommending kindnet
	I0321 22:04:14.942656  153142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0321 22:04:14.942672  153142 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860915 NodeName:multinode-860915 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0321 22:04:14.942784  153142 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-860915"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0321 22:04:14.942881  153142 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-860915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0321 22:04:14.942925  153142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0321 22:04:14.948975  153142 command_runner.go:130] > kubeadm
	I0321 22:04:14.948988  153142 command_runner.go:130] > kubectl
	I0321 22:04:14.948993  153142 command_runner.go:130] > kubelet
	I0321 22:04:14.949516  153142 binaries.go:44] Found k8s binaries, skipping transfer
	I0321 22:04:14.949580  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0321 22:04:14.955892  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0321 22:04:14.967690  153142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0321 22:04:14.979556  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0321 22:04:14.991317  153142 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0321 22:04:14.994079  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:04:15.002648  153142 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915 for IP: 192.168.58.2
	I0321 22:04:15.002686  153142 certs.go:186] acquiring lock for shared ca certs: {Name:mke51456f2089c678c8a8085b7dd3883448bd6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.002813  153142 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key
	I0321 22:04:15.002853  153142 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key
	I0321 22:04:15.002902  153142 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key
	I0321 22:04:15.002913  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt with IP's: []
	I0321 22:04:15.127952  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt ...
	I0321 22:04:15.127979  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt: {Name:mk08d3ec2c118c71923c4a509551dcfa9361e19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.128131  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key ...
	I0321 22:04:15.128145  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key: {Name:mk072e10625389a05c2d097d968f2cb300fdc41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.128217  153142 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041
	I0321 22:04:15.128234  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0321 22:04:15.250113  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 ...
	I0321 22:04:15.250144  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041: {Name:mk3eede047e77adf4f04779d508cf7739e315510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.250294  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041 ...
	I0321 22:04:15.250305  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041: {Name:mk672094b844df5edd66a3029ed0f0575a93df11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.250375  153142 certs.go:333] copying /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt
	I0321 22:04:15.250434  153142 certs.go:337] copying /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key
	I0321 22:04:15.250480  153142 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key
	I0321 22:04:15.250492  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt with IP's: []
	I0321 22:04:15.392358  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt ...
	I0321 22:04:15.392389  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt: {Name:mk24f3246f94d8a0d04a4cd8ba3a4340840af825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.392539  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key ...
	I0321 22:04:15.392550  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key: {Name:mk516f91b566d06a1f166ce5c17af69261bf9a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.392614  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0321 22:04:15.392631  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0321 22:04:15.392642  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0321 22:04:15.392654  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0321 22:04:15.392666  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0321 22:04:15.392680  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0321 22:04:15.392694  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0321 22:04:15.392706  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0321 22:04:15.392753  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem (1338 bytes)
	W0321 22:04:15.392785  153142 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532_empty.pem, impossibly tiny 0 bytes
	I0321 22:04:15.392797  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem (1675 bytes)
	I0321 22:04:15.392820  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem (1082 bytes)
	I0321 22:04:15.392842  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem (1123 bytes)
	I0321 22:04:15.392863  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem (1675 bytes)
	I0321 22:04:15.392904  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:04:15.392927  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem -> /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.392940  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.392953  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.393461  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0321 22:04:15.410275  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0321 22:04:15.426112  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0321 22:04:15.441734  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0321 22:04:15.457375  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0321 22:04:15.473102  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0321 22:04:15.488716  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0321 22:04:15.504671  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0321 22:04:15.520658  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem --> /usr/share/ca-certificates/10532.pem (1338 bytes)
	I0321 22:04:15.536364  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /usr/share/ca-certificates/105322.pem (1708 bytes)
	I0321 22:04:15.551891  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0321 22:04:15.568292  153142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0321 22:04:15.579982  153142 ssh_runner.go:195] Run: openssl version
	I0321 22:04:15.584195  153142 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0321 22:04:15.584322  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105322.pem && ln -fs /usr/share/ca-certificates/105322.pem /etc/ssl/certs/105322.pem"
	I0321 22:04:15.591218  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593858  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593906  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593942  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.598198  153142 command_runner.go:130] > 3ec20f2e
	I0321 22:04:15.598263  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105322.pem /etc/ssl/certs/3ec20f2e.0"
	I0321 22:04:15.604852  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0321 22:04:15.611389  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613929  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613950  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613983  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.618052  153142 command_runner.go:130] > b5213941
	I0321 22:04:15.618245  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0321 22:04:15.625048  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10532.pem && ln -fs /usr/share/ca-certificates/10532.pem /etc/ssl/certs/10532.pem"
	I0321 22:04:15.631810  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634524  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634659  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634709  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.638900  153142 command_runner.go:130] > 51391683
	I0321 22:04:15.639126  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10532.pem /etc/ssl/certs/51391683.0"
	I0321 22:04:15.645803  153142 kubeadm.go:401] StartCluster: {Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:04:15.645921  153142 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0321 22:04:15.661796  153142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0321 22:04:15.668233  153142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0321 22:04:15.668259  153142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0321 22:04:15.668272  153142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0321 22:04:15.668868  153142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0321 22:04:15.675639  153142 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0321 22:04:15.675681  153142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0321 22:04:15.681936  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0321 22:04:15.681960  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0321 22:04:15.681972  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0321 22:04:15.681984  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0321 22:04:15.682030  153142 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0321 22:04:15.682066  153142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0321 22:04:15.719562  153142 kubeadm.go:322] W0321 22:04:15.718862    1403 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:04:15.719598  153142 command_runner.go:130] ! W0321 22:04:15.718862    1403 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:04:15.758204  153142 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:04:15.758235  153142 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:04:15.819087  153142 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:04:15.819116  153142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:04:28.476596  153142 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0321 22:04:28.476635  153142 command_runner.go:130] > [init] Using Kubernetes version: v1.26.2
	I0321 22:04:28.476721  153142 kubeadm.go:322] [preflight] Running pre-flight checks
	I0321 22:04:28.476736  153142 command_runner.go:130] > [preflight] Running pre-flight checks
	I0321 22:04:28.476843  153142 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:04:28.476856  153142 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:04:28.476929  153142 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:04:28.476941  153142 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:04:28.476990  153142 kubeadm.go:322] OS: Linux
	I0321 22:04:28.477017  153142 command_runner.go:130] > OS: Linux
	I0321 22:04:28.477116  153142 kubeadm.go:322] CGROUPS_CPU: enabled
	I0321 22:04:28.477131  153142 command_runner.go:130] > CGROUPS_CPU: enabled
	I0321 22:04:28.477203  153142 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0321 22:04:28.477221  153142 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0321 22:04:28.477295  153142 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0321 22:04:28.477305  153142 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0321 22:04:28.477376  153142 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0321 22:04:28.477386  153142 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0321 22:04:28.477448  153142 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0321 22:04:28.477458  153142 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0321 22:04:28.477537  153142 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0321 22:04:28.477547  153142 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0321 22:04:28.477616  153142 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0321 22:04:28.477631  153142 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0321 22:04:28.477712  153142 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0321 22:04:28.477761  153142 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0321 22:04:28.477845  153142 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0321 22:04:28.477857  153142 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0321 22:04:28.477995  153142 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0321 22:04:28.478009  153142 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0321 22:04:28.478141  153142 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0321 22:04:28.478154  153142 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0321 22:04:28.478261  153142 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0321 22:04:28.478273  153142 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0321 22:04:28.478358  153142 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0321 22:04:28.481197  153142 out.go:204]   - Generating certificates and keys ...
	I0321 22:04:28.478432  153142 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0321 22:04:28.481346  153142 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0321 22:04:28.481368  153142 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0321 22:04:28.481443  153142 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0321 22:04:28.481452  153142 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0321 22:04:28.481512  153142 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0321 22:04:28.481518  153142 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0321 22:04:28.481561  153142 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0321 22:04:28.481565  153142 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0321 22:04:28.481634  153142 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0321 22:04:28.481647  153142 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0321 22:04:28.481702  153142 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0321 22:04:28.481710  153142 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0321 22:04:28.481774  153142 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0321 22:04:28.481782  153142 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0321 22:04:28.481931  153142 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.481938  153142 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482066  153142 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0321 22:04:28.482084  153142 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0321 22:04:28.482223  153142 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482230  153142 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482308  153142 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0321 22:04:28.482315  153142 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0321 22:04:28.482392  153142 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0321 22:04:28.482397  153142 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0321 22:04:28.482445  153142 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0321 22:04:28.482451  153142 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0321 22:04:28.482518  153142 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0321 22:04:28.482535  153142 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0321 22:04:28.482592  153142 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0321 22:04:28.482599  153142 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0321 22:04:28.482661  153142 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0321 22:04:28.482667  153142 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0321 22:04:28.482752  153142 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0321 22:04:28.482759  153142 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0321 22:04:28.482825  153142 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0321 22:04:28.482834  153142 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0321 22:04:28.482947  153142 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:04:28.482954  153142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:04:28.483023  153142 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:04:28.483027  153142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:04:28.483057  153142 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0321 22:04:28.483060  153142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0321 22:04:28.483114  153142 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0321 22:04:28.484745  153142 out.go:204]   - Booting up control plane ...
	I0321 22:04:28.483246  153142 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0321 22:04:28.484864  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0321 22:04:28.484883  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0321 22:04:28.485006  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0321 22:04:28.485015  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0321 22:04:28.485112  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0321 22:04:28.485137  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0321 22:04:28.485266  153142 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0321 22:04:28.485281  153142 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0321 22:04:28.485440  153142 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0321 22:04:28.485447  153142 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0321 22:04:28.485519  153142 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.501800 seconds
	I0321 22:04:28.485526  153142 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.501800 seconds
	I0321 22:04:28.485658  153142 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0321 22:04:28.485669  153142 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0321 22:04:28.485820  153142 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0321 22:04:28.485830  153142 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0321 22:04:28.485896  153142 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0321 22:04:28.485902  153142 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0321 22:04:28.486113  153142 command_runner.go:130] > [mark-control-plane] Marking the node multinode-860915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0321 22:04:28.486127  153142 kubeadm.go:322] [mark-control-plane] Marking the node multinode-860915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0321 22:04:28.486185  153142 command_runner.go:130] > [bootstrap-token] Using token: sw9hi7.obyze4s7kes6ja14
	I0321 22:04:28.486194  153142 kubeadm.go:322] [bootstrap-token] Using token: sw9hi7.obyze4s7kes6ja14
	I0321 22:04:28.487538  153142 out.go:204]   - Configuring RBAC rules ...
	I0321 22:04:28.487641  153142 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0321 22:04:28.487660  153142 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0321 22:04:28.487755  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0321 22:04:28.487765  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0321 22:04:28.487976  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0321 22:04:28.487993  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0321 22:04:28.488159  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0321 22:04:28.488169  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0321 22:04:28.488316  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0321 22:04:28.488326  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0321 22:04:28.488468  153142 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0321 22:04:28.488482  153142 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0321 22:04:28.488608  153142 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0321 22:04:28.488615  153142 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0321 22:04:28.488654  153142 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0321 22:04:28.488661  153142 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0321 22:04:28.488701  153142 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0321 22:04:28.488707  153142 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0321 22:04:28.488711  153142 kubeadm.go:322] 
	I0321 22:04:28.488762  153142 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0321 22:04:28.488768  153142 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0321 22:04:28.488772  153142 kubeadm.go:322] 
	I0321 22:04:28.488871  153142 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0321 22:04:28.488883  153142 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0321 22:04:28.488893  153142 kubeadm.go:322] 
	I0321 22:04:28.488927  153142 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0321 22:04:28.488936  153142 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0321 22:04:28.488991  153142 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0321 22:04:28.489010  153142 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0321 22:04:28.489087  153142 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0321 22:04:28.489099  153142 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0321 22:04:28.489110  153142 kubeadm.go:322] 
	I0321 22:04:28.489179  153142 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0321 22:04:28.489188  153142 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0321 22:04:28.489199  153142 kubeadm.go:322] 
	I0321 22:04:28.489263  153142 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0321 22:04:28.489271  153142 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0321 22:04:28.489277  153142 kubeadm.go:322] 
	I0321 22:04:28.489349  153142 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0321 22:04:28.489358  153142 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0321 22:04:28.489463  153142 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0321 22:04:28.489483  153142 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0321 22:04:28.489585  153142 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0321 22:04:28.489597  153142 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0321 22:04:28.489603  153142 kubeadm.go:322] 
	I0321 22:04:28.489718  153142 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0321 22:04:28.489727  153142 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0321 22:04:28.489839  153142 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0321 22:04:28.489855  153142 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0321 22:04:28.489875  153142 kubeadm.go:322] 
	I0321 22:04:28.490031  153142 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490054  153142 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490198  153142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 \
	I0321 22:04:28.490213  153142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 \
	I0321 22:04:28.490245  153142 command_runner.go:130] > 	--control-plane 
	I0321 22:04:28.490254  153142 kubeadm.go:322] 	--control-plane 
	I0321 22:04:28.490265  153142 kubeadm.go:322] 
	I0321 22:04:28.490398  153142 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0321 22:04:28.490412  153142 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0321 22:04:28.490423  153142 kubeadm.go:322] 
	I0321 22:04:28.490539  153142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490549  153142 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490676  153142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:04:28.490696  153142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:04:28.490705  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:28.490722  153142 cni.go:136] 1 nodes found, recommending kindnet
	I0321 22:04:28.492327  153142 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0321 22:04:28.493570  153142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0321 22:04:28.497140  153142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0321 22:04:28.497156  153142 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0321 22:04:28.497161  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1320614     Links: 1
	I0321 22:04:28.497167  153142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:04:28.497172  153142 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:04:28.497177  153142 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:04:28.497182  153142 command_runner.go:130] > Change: 2023-03-21 21:49:52.361193928 +0000
	I0321 22:04:28.497186  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:28.497296  153142 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0321 22:04:28.497314  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0321 22:04:28.510389  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0321 22:04:29.287722  153142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0321 22:04:29.293117  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0321 22:04:29.298369  153142 command_runner.go:130] > serviceaccount/kindnet created
	I0321 22:04:29.306519  153142 command_runner.go:130] > daemonset.apps/kindnet created
	I0321 22:04:29.309659  153142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0321 22:04:29.309717  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.309717  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4 minikube.k8s.io/name=multinode-860915 minikube.k8s.io/updated_at=2023_03_21T22_04_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.317099  153142 command_runner.go:130] > -16
	I0321 22:04:29.386841  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0321 22:04:29.390290  153142 ops.go:34] apiserver oom_adj: -16
	I0321 22:04:29.403137  153142 command_runner.go:130] > node/multinode-860915 labeled
	I0321 22:04:29.403149  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.474378  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:29.977554  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:30.038206  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:30.477845  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:30.541280  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:30.977547  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:31.036257  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:31.477325  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:31.539623  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:31.977193  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:32.035636  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:32.477758  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:32.538581  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:32.977166  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:33.035944  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:33.477928  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:33.539580  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:33.977079  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:34.037511  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:34.477080  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:34.535859  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:34.977094  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:35.037239  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:35.477274  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:35.536247  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:35.977275  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:36.036638  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:36.477555  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:36.540393  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:36.976977  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:37.037109  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:37.476969  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:37.539953  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:37.977623  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:38.039986  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:38.477623  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:38.538182  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:38.977436  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:39.040478  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:39.477083  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:39.539394  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:39.976959  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:40.037867  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:40.477207  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:40.539605  153142 command_runner.go:130] > NAME      SECRETS   AGE
	I0321 22:04:40.539629  153142 command_runner.go:130] > default   0         0s
	I0321 22:04:40.542140  153142 kubeadm.go:1073] duration metric: took 11.232475575s to wait for elevateKubeSystemPrivileges.
	I0321 22:04:40.542168  153142 kubeadm.go:403] StartCluster complete in 24.89636851s
	I0321 22:04:40.542187  153142 settings.go:142] acquiring lock: {Name:mk64852ffcce32dfbe0aa61ac3d7147ea68ec4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:40.542264  153142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.543069  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/kubeconfig: {Name:mk5a118d4705650f833f938dc560fa34945ea156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:40.543271  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0321 22:04:40.543471  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:40.543399  153142 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0321 22:04:40.543531  153142 addons.go:66] Setting storage-provisioner=true in profile "multinode-860915"
	I0321 22:04:40.543549  153142 addons.go:228] Setting addon storage-provisioner=true in "multinode-860915"
	I0321 22:04:40.543565  153142 addons.go:66] Setting default-storageclass=true in profile "multinode-860915"
	I0321 22:04:40.543582  153142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-860915"
	I0321 22:04:40.543598  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.543600  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:04:40.543856  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:40.543962  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.544170  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.544522  153142 cert_rotation.go:137] Starting client certificate rotation controller
	I0321 22:04:40.544771  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:40.544788  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.544800  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.544810  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.554053  153142 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0321 22:04:40.554082  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.554092  153142 round_trippers.go:580]     Audit-Id: 94537c10-a339-43c1-9658-e21abadb1d1d
	I0321 22:04:40.554100  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.554109  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.554120  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.554133  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.554145  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:40.554157  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.554185  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"230","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.554655  153142 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"230","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.554717  153142 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:40.554730  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.554741  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.554753  153142 round_trippers.go:473]     Content-Type: application/json
	I0321 22:04:40.554766  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.560784  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:40.560806  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.560816  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.560825  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.560838  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:40.560851  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.560863  153142 round_trippers.go:580]     Audit-Id: 700caa40-41fd-4827-aedb-8f6680a0e182
	I0321 22:04:40.560875  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.560886  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.560915  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"302","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.622815  153142 command_runner.go:130] > apiVersion: v1
	I0321 22:04:40.622842  153142 command_runner.go:130] > data:
	I0321 22:04:40.622849  153142 command_runner.go:130] >   Corefile: |
	I0321 22:04:40.622856  153142 command_runner.go:130] >     .:53 {
	I0321 22:04:40.622862  153142 command_runner.go:130] >         errors
	I0321 22:04:40.622869  153142 command_runner.go:130] >         health {
	I0321 22:04:40.622877  153142 command_runner.go:130] >            lameduck 5s
	I0321 22:04:40.622883  153142 command_runner.go:130] >         }
	I0321 22:04:40.622889  153142 command_runner.go:130] >         ready
	I0321 22:04:40.622903  153142 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0321 22:04:40.622913  153142 command_runner.go:130] >            pods insecure
	I0321 22:04:40.622921  153142 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0321 22:04:40.622932  153142 command_runner.go:130] >            ttl 30
	I0321 22:04:40.622942  153142 command_runner.go:130] >         }
	I0321 22:04:40.622948  153142 command_runner.go:130] >         prometheus :9153
	I0321 22:04:40.622957  153142 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0321 22:04:40.622967  153142 command_runner.go:130] >            max_concurrent 1000
	I0321 22:04:40.622972  153142 command_runner.go:130] >         }
	I0321 22:04:40.622981  153142 command_runner.go:130] >         cache 30
	I0321 22:04:40.622993  153142 command_runner.go:130] >         loop
	I0321 22:04:40.623002  153142 command_runner.go:130] >         reload
	I0321 22:04:40.623008  153142 command_runner.go:130] >         loadbalance
	I0321 22:04:40.623016  153142 command_runner.go:130] >     }
	I0321 22:04:40.623022  153142 command_runner.go:130] > kind: ConfigMap
	I0321 22:04:40.623028  153142 command_runner.go:130] > metadata:
	I0321 22:04:40.623039  153142 command_runner.go:130] >   creationTimestamp: "2023-03-21T22:04:28Z"
	I0321 22:04:40.623048  153142 command_runner.go:130] >   name: coredns
	I0321 22:04:40.623061  153142 command_runner.go:130] >   namespace: kube-system
	I0321 22:04:40.623070  153142 command_runner.go:130] >   resourceVersion: "226"
	I0321 22:04:40.623077  153142 command_runner.go:130] >   uid: 28ab58e1-d4da-4e29-b7d3-1b4efa392be9
	I0321 22:04:40.623251  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0321 22:04:40.623966  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.624252  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:40.624623  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0321 22:04:40.624637  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.624647  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.624657  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.627498  153142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:40.626504  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:40.628832  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.628844  153142 round_trippers.go:580]     Content-Length: 109
	I0321 22:04:40.628853  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.628875  153142 round_trippers.go:580]     Audit-Id: e6fcb386-d870-44ab-a01e-a2874f9c4158
	I0321 22:04:40.628887  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.628898  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.628907  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.628918  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.628939  153142 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"302"},"items":[]}
	I0321 22:04:40.628940  153142 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:04:40.629041  153142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0321 22:04:40.629092  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:40.629177  153142 addons.go:228] Setting addon default-storageclass=true in "multinode-860915"
	I0321 22:04:40.629216  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:04:40.629579  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.726845  153142 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0321 22:04:40.726867  153142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0321 22:04:40.726910  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:40.729301  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:40.795872  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:40.881645  153142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:04:40.897232  153142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0321 22:04:40.904392  153142 command_runner.go:130] > configmap/coredns replaced
	I0321 22:04:40.904428  153142 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0321 22:04:41.061782  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:41.061805  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.061813  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.061820  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.069905  153142 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0321 22:04:41.069952  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.069963  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.069974  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.069988  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.070003  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:41.070039  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.070049  153142 round_trippers.go:580]     Audit-Id: 974bfdc4-c1e8-4e2c-8575-343407b3a2a6
	I0321 22:04:41.070081  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.070315  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"302","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:41.070437  153142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-860915" context rescaled to 1 replicas
	I0321 22:04:41.070477  153142 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0321 22:04:41.072318  153142 out.go:177] * Verifying Kubernetes components...
	I0321 22:04:41.073758  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:04:41.414101  153142 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0321 22:04:41.414130  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0321 22:04:41.414141  153142 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0321 22:04:41.414153  153142 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0321 22:04:41.414165  153142 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0321 22:04:41.414173  153142 command_runner.go:130] > pod/storage-provisioner created
	I0321 22:04:41.414226  153142 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0321 22:04:41.415734  153142 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0321 22:04:41.414770  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:41.417076  153142 addons.go:499] enable addons completed in 873.682871ms: enabled=[storage-provisioner default-storageclass]
	I0321 22:04:41.417308  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:41.417563  153142 node_ready.go:35] waiting up to 6m0s for node "multinode-860915" to be "Ready" ...
	I0321 22:04:41.417615  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.417623  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.417631  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.417639  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.419065  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.419084  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.419095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.419104  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.419113  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.419122  153142 round_trippers.go:580]     Audit-Id: bc468676-c445-4b0f-9b0c-16279d29ccb1
	I0321 22:04:41.419138  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.419145  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.419241  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:41.419924  153142 node_ready.go:49] node "multinode-860915" has status "Ready":"True"
	I0321 22:04:41.419939  153142 node_ready.go:38] duration metric: took 2.362614ms waiting for node "multinode-860915" to be "Ready" ...
	I0321 22:04:41.419949  153142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:04:41.420041  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:41.420055  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.420067  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.420080  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.471249  153142 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0321 22:04:41.471278  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.471290  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.471299  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.471312  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.471325  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.471333  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.471347  153142 round_trippers.go:580]     Audit-Id: 4aecddd7-622c-4d24-85c1-7b842c2e77a4
	I0321 22:04:41.472300  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"362"},"items":[{"metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0
ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 59099 chars]
	I0321 22:04:41.476444  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-69rb6" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:41.476560  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:41.476573  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.476584  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.476593  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.481065  153142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0321 22:04:41.481086  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.481097  153142 round_trippers.go:580]     Audit-Id: 835f4f99-3c7a-4581-9df7-641847d78a6c
	I0321 22:04:41.481114  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.481123  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.481134  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.481157  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.481174  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.481300  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:41.481799  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.481817  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.481828  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.481839  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.483559  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.483581  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.483590  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.483600  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.483614  153142 round_trippers.go:580]     Audit-Id: 5442dd18-f9f9-4ba9-85d2-4067cd867b45
	I0321 22:04:41.483631  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.483643  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.483656  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.483916  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:41.985065  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:41.985087  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.985097  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.985106  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.987508  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:41.987536  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.987549  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.987559  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.987568  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.987582  153142 round_trippers.go:580]     Audit-Id: a5c17970-d993-4042-a90e-17a4a80f8b73
	I0321 22:04:41.987591  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.987607  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.987747  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:41.988379  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.988400  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.988412  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.988422  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.990384  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.990403  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.990413  153142 round_trippers.go:580]     Audit-Id: 3423137d-8bc4-4b92-af0b-b0699e8ebc83
	I0321 22:04:41.990422  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.990431  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.990440  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.990446  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.990452  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.990580  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:42.484654  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:42.484741  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.484754  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.484765  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.487170  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:42.487197  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.487207  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.487216  153142 round_trippers.go:580]     Audit-Id: 45515adb-105d-484f-a700-37e51a2f83bd
	I0321 22:04:42.487227  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.487242  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.487251  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.487260  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.487827  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:42.488606  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:42.488618  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.488628  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.488639  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.493959  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:42.493986  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.493997  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.494006  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.494034  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.494043  153142 round_trippers.go:580]     Audit-Id: cb9d1204-3f7b-4c8f-879f-c19462724a0f
	I0321 22:04:42.494052  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.494065  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.494396  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:42.984534  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:42.984570  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.984584  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.984593  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.987075  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:42.987103  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.987114  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.987124  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.987133  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.987142  153142 round_trippers.go:580]     Audit-Id: 5bb27663-5f2f-4e2c-a7af-dd5af233cc5f
	I0321 22:04:42.987156  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.987166  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.987317  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:42.987988  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:42.988008  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.988020  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.988030  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.989827  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:42.989844  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.989850  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.989856  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.989861  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.989868  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.989874  153142 round_trippers.go:580]     Audit-Id: cff74ebb-2dbc-417d-8458-d3b1d7fc8cd0
	I0321 22:04:42.989880  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.990180  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:43.485298  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:43.485322  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.485334  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.485343  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.487933  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:43.487957  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.487968  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.487977  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.487988  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.487999  153142 round_trippers.go:580]     Audit-Id: 7fbfa791-c360-43fa-a9b1-4dff800b2c97
	I0321 22:04:43.488014  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.488025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.488165  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:43.488718  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:43.488734  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.488745  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.488756  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.490735  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:43.490755  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.490764  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.490778  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.490786  153142 round_trippers.go:580]     Audit-Id: af1982b1-d399-4e09-a876-6f47f9450f12
	I0321 22:04:43.490797  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.490807  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.490820  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.490974  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:43.491310  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:43.984844  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:43.984866  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.984874  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.984880  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.987069  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:43.987086  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.987093  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.987099  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.987105  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.987114  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.987122  153142 round_trippers.go:580]     Audit-Id: 423e1679-0b17-4065-852a-68348efd8d58
	I0321 22:04:43.987137  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.987243  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:43.987733  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:43.987749  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.987756  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.987762  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.989553  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:43.989571  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.989581  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.989591  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.989600  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.989609  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.989627  153142 round_trippers.go:580]     Audit-Id: 776b794c-a4c8-4429-bc17-f6e0daa2f9d5
	I0321 22:04:43.989635  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.989751  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:44.485414  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:44.485433  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.485441  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.485448  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.487822  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:44.487848  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.487859  153142 round_trippers.go:580]     Audit-Id: 0ea983ef-43a1-44c8-a252-49c34a03672d
	I0321 22:04:44.487867  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.487872  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.487881  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.487886  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.487894  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.487979  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:44.488408  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:44.488422  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.488429  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.488435  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.490115  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:44.490138  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.490149  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.490158  153142 round_trippers.go:580]     Audit-Id: 7d879d9a-81b6-403e-9e6f-de57a3b89a65
	I0321 22:04:44.490168  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.490177  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.490190  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.490202  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.490323  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:44.984982  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:44.985004  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.985015  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.985024  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.987072  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:44.987097  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.987108  153142 round_trippers.go:580]     Audit-Id: ce9c3f19-1b07-458a-8b8b-415b29750249
	I0321 22:04:44.987117  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.987125  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.987133  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.987140  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.987156  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.987263  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:44.987705  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:44.987718  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.987725  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.987731  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.989406  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:44.989425  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.989435  153142 round_trippers.go:580]     Audit-Id: c86022db-5709-4d5f-b642-979a6aa6ccfe
	I0321 22:04:44.989443  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.989453  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.989466  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.989473  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.989481  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.989563  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.485168  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:45.485188  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.485196  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.485203  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.487268  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:45.487291  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.487301  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.487310  153142 round_trippers.go:580]     Audit-Id: 53a5f3cf-2c4a-4567-9fef-c1d7d48ef1af
	I0321 22:04:45.487322  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.487331  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.487343  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.487355  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.487445  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:45.487906  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:45.487918  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.487925  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.487931  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.489688  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.489704  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.489712  153142 round_trippers.go:580]     Audit-Id: 679af3ba-fdff-43ac-8e1f-9643de53adbd
	I0321 22:04:45.489720  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.489728  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.489737  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.489748  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.489754  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.489878  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.984461  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:45.984480  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.984488  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.984498  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.986473  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.986493  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.986502  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.986510  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.986519  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.986527  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.986538  153142 round_trippers.go:580]     Audit-Id: 1541d6db-ca52-4f82-a422-ca3d5b36e145
	I0321 22:04:45.986552  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.986699  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:45.987214  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:45.987229  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.987240  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.987250  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.988837  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.988858  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.988865  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.988870  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.988876  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.988885  153142 round_trippers.go:580]     Audit-Id: 6a9e4627-72e7-4a03-aed4-579b657db522
	I0321 22:04:45.988894  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.988912  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.989030  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.989330  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:46.484547  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:46.484569  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.484583  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.484592  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.486663  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:46.486684  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.486691  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.486697  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.486706  153142 round_trippers.go:580]     Audit-Id: c0df069f-6b50-489f-a263-192839f1664c
	I0321 22:04:46.486714  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.486728  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.486739  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.486839  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:46.487280  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:46.487294  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.487301  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.487310  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.488878  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:46.488899  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.488908  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.488916  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.488925  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.488937  153142 round_trippers.go:580]     Audit-Id: 3e5046fe-02fc-45e8-b272-240a4c3d24b8
	I0321 22:04:46.488947  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.488959  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.489077  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:46.985269  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:46.985296  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.985308  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.985317  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.987574  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:46.987596  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.987607  153142 round_trippers.go:580]     Audit-Id: 35188258-bffd-4578-af44-aaf85090ad7c
	I0321 22:04:46.987616  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.987625  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.987634  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.987645  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.987655  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.987787  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:46.988257  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:46.988270  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.988277  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.988284  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.989956  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:46.989980  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.989990  153142 round_trippers.go:580]     Audit-Id: 5a0d1916-53d7-4cf0-bfda-0b8235e1b36b
	I0321 22:04:46.990000  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.990009  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.990044  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.990058  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.990070  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.990161  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.484717  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:47.484738  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.484746  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.484752  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.487128  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:47.487153  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.487164  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.487172  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.487178  153142 round_trippers.go:580]     Audit-Id: d0d8efbb-284f-4e97-9e48-2c3fb565343a
	I0321 22:04:47.487183  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.487189  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.487196  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.487294  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:47.487817  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:47.487835  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.487846  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.487856  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.489815  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:47.489837  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.489844  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.489850  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.489856  153142 round_trippers.go:580]     Audit-Id: 0b02e5eb-0669-4053-b1d0-3fa9aa788bfa
	I0321 22:04:47.489861  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.489867  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.489872  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.490049  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.984610  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:47.984638  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.984650  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.984660  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.987238  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:47.987263  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.987274  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.987284  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.987296  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.987307  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.987325  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.987338  153142 round_trippers.go:580]     Audit-Id: 18e51c12-f396-4b7d-aeff-772acf914b86
	I0321 22:04:47.987465  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:47.988058  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:47.988077  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.988089  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.988100  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.990007  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:47.990046  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.990056  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.990064  153142 round_trippers.go:580]     Audit-Id: 9dc070e3-8f3e-4e6f-8476-b77a7ca7d0da
	I0321 22:04:47.990072  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.990086  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.990095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.990107  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.990208  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.990578  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:48.484479  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:48.484505  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.484517  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.484527  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.487382  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.487403  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.487424  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.487434  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.487446  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.487455  153142 round_trippers.go:580]     Audit-Id: 55cbbb31-3c8b-4aa9-ba97-7551344bc56b
	I0321 22:04:48.487464  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.487473  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.487595  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:48.488145  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:48.488161  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.488175  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.488186  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.490165  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:48.490183  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.490189  153142 round_trippers.go:580]     Audit-Id: 9c37da4d-c305-465e-a8ab-80b9ca498bf9
	I0321 22:04:48.490195  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.490201  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.490209  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.490218  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.490227  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.490361  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:48.985150  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:48.985172  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.985181  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.985187  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.987939  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.987969  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.987981  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.987991  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.988000  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.988009  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.988018  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.988031  153142 round_trippers.go:580]     Audit-Id: 7efa780b-8a64-4a08-84f9-1dfa59a8c533
	I0321 22:04:48.988163  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:48.988794  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:48.988811  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.988822  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.988832  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.990981  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.991003  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.991012  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.991020  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.991029  153142 round_trippers.go:580]     Audit-Id: c26bf01b-5c12-45fd-847b-11cb3a88329d
	I0321 22:04:48.991039  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.991055  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.991065  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.991197  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.484727  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:49.484749  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.484760  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.484772  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.487426  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.487457  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.487467  153142 round_trippers.go:580]     Audit-Id: 8c042c27-08c3-4471-a73e-d62bf819701c
	I0321 22:04:49.487475  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.487483  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.487491  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.487499  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.487507  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.487664  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:49.488236  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:49.488248  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.488258  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.488267  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.490494  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.490519  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.490530  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.490540  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.490549  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.490559  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.490572  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.490581  153142 round_trippers.go:580]     Audit-Id: 9338d5e3-9be2-4f3b-8ce3-fb786608a8f8
	I0321 22:04:49.490735  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.985216  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:49.985242  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.985253  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.985263  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.987930  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.988003  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.988019  153142 round_trippers.go:580]     Audit-Id: 73d1a6ac-d31e-4005-a0a3-799ab3fbdce5
	I0321 22:04:49.988029  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.988039  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.988051  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.988064  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.988074  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.988222  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:49.988803  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:49.988819  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.988831  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.988841  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.990946  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.990970  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.990981  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.990990  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.991000  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.991012  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.991024  153142 round_trippers.go:580]     Audit-Id: c6e8d7cc-14e9-48df-9777-23b132fa4ab3
	I0321 22:04:49.991036  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.991135  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.991543  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:50.484484  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:50.484503  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.484511  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.484518  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.487059  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.487084  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.487095  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.487104  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.487113  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.487122  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.487136  153142 round_trippers.go:580]     Audit-Id: a2836ba2-e44d-48b8-95be-a52078ebec5b
	I0321 22:04:50.487145  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.487244  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:50.487702  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:50.487718  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.487725  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.487731  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.489972  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.490005  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.490025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.490038  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.490058  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.490078  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.490088  153142 round_trippers.go:580]     Audit-Id: 42a0a8e7-5cc8-4491-9b4d-a14368a49b14
	I0321 22:04:50.490100  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.490291  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:50.984504  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:50.984522  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.984530  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.984538  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.987243  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.987269  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.987280  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.987290  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.987299  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.987312  153142 round_trippers.go:580]     Audit-Id: 2f355af5-66af-4449-a6a8-32e1072c7cfa
	I0321 22:04:50.987321  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.987330  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.987472  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:50.987959  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:50.987975  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.987982  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.987988  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.989939  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:50.989971  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.989983  153142 round_trippers.go:580]     Audit-Id: 63aad23f-db87-43dd-b112-c10bd8717929
	I0321 22:04:50.989993  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.990009  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.990039  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.990050  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.990062  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.990236  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:51.485328  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:51.485348  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.485356  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.485363  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.490845  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:51.490874  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.490885  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.490894  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.490903  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.490913  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.490922  153142 round_trippers.go:580]     Audit-Id: 4a0d7665-4e97-40af-a95e-67c83acbf5ec
	I0321 22:04:51.490937  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.491061  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:51.491647  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:51.491677  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.491688  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.491698  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.493804  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.493824  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.493835  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.493845  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.493862  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.493871  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.493880  153142 round_trippers.go:580]     Audit-Id: 5f400417-a130-4f19-8974-e9a8155b2f2b
	I0321 22:04:51.493891  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.494095  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:51.984516  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:51.984545  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.984562  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.984568  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.987241  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.987274  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.987286  153142 round_trippers.go:580]     Audit-Id: 85c00281-e19c-4c04-829f-af5273ac03d4
	I0321 22:04:51.987296  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.987309  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.987322  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.987338  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.987350  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.987478  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:51.988047  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:51.988064  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.988075  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.988084  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.990167  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.990190  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.990202  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.990212  153142 round_trippers.go:580]     Audit-Id: 024c87eb-02f9-4e75-945b-4da1264e3a70
	I0321 22:04:51.990221  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.990229  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.990239  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.990248  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.990426  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:52.484705  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:52.484732  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.484744  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.484754  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.487323  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.487351  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.487360  153142 round_trippers.go:580]     Audit-Id: 9d068a19-3ca9-4e48-a003-ad80e5e52d39
	I0321 22:04:52.487370  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.487379  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.487393  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.487402  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.487415  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.487557  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:52.488127  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:52.488148  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.488159  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.488169  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.490376  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.490399  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.490409  153142 round_trippers.go:580]     Audit-Id: 75ac35a9-baa0-4566-8316-5ac22944b8d3
	I0321 22:04:52.490418  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.490428  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.490449  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.490462  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.490474  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.490606  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:52.490985  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:52.985189  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:52.985216  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.985228  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.985239  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.987644  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.987670  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.987680  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.987689  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.987698  153142 round_trippers.go:580]     Audit-Id: b784c229-40e8-4d5a-915b-af8bae3e795c
	I0321 22:04:52.987708  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.987719  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.987732  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.987872  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:52.988462  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:52.988483  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.988494  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.988504  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.990673  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.990694  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.990704  153142 round_trippers.go:580]     Audit-Id: 30590a51-be0a-4c1f-abfc-46a8dde6413d
	I0321 22:04:52.990714  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.990721  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.990731  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.990744  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.990756  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.990896  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:53.484522  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:53.484541  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.484549  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.484556  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.487119  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:53.487144  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.487156  153142 round_trippers.go:580]     Audit-Id: 9bdd6514-e0de-4b23-b459-4c71be70ee81
	I0321 22:04:53.487166  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.487184  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.487197  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.487207  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.487220  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.487339  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:53.487875  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:53.487891  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.487902  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.487912  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.489891  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:53.489912  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.489921  153142 round_trippers.go:580]     Audit-Id: 6e7602cb-ba99-4dbf-a4f9-a54d85fd99a1
	I0321 22:04:53.489930  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.489938  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.489947  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.489960  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.489973  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.490093  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:53.985066  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:53.985093  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.985106  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.985116  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.987659  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:53.987689  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.987701  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.987711  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.987726  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.987739  153142 round_trippers.go:580]     Audit-Id: a4f97a4e-806c-43e9-8c32-6fc34126ddec
	I0321 22:04:53.987752  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.987764  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.987893  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:53.988461  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:53.988477  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.988489  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.988503  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.990362  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:53.990385  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.990396  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.990405  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.990418  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.990431  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.990447  153142 round_trippers.go:580]     Audit-Id: 1ad08d2a-8347-4673-9693-79f4c28456a9
	I0321 22:04:53.990457  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.990618  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.485231  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:54.485250  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.485258  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.485264  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.487711  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.487733  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.487744  153142 round_trippers.go:580]     Audit-Id: 077a7806-8ab8-484f-bcf2-3b5f72e9bb30
	I0321 22:04:54.487753  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.487762  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.487793  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.487807  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.487817  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.487938  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:54.488489  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:54.488503  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.488515  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.488533  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.490447  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:54.490472  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.490482  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.490490  153142 round_trippers.go:580]     Audit-Id: f0176315-dd9e-4888-a1e2-8b323df0d5fb
	I0321 22:04:54.490499  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.490506  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.490542  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.490557  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.490664  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.984955  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:54.984980  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.984992  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.985002  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.987700  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.987729  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.987739  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.987748  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.987758  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.987767  153142 round_trippers.go:580]     Audit-Id: 30650966-37ff-4687-a55a-66178a417e62
	I0321 22:04:54.987777  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.987791  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.987919  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:54.988537  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:54.988555  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.988566  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.988576  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.990641  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.990666  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.990678  153142 round_trippers.go:580]     Audit-Id: 7ab860df-8080-4bd8-9716-abcb5f5a8778
	I0321 22:04:54.990688  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.990698  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.990709  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.990727  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.990740  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.990876  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.991289  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:55.485479  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:55.485501  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.485513  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.485523  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.487972  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:55.487997  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.488008  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.488018  153142 round_trippers.go:580]     Audit-Id: 33f3d116-9d25-403a-9c68-d12d49b569c4
	I0321 22:04:55.488034  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.488043  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.488053  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.488068  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.488181  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:55.488786  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:55.488806  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.488818  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.488828  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.490808  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:55.490836  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.490847  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.490862  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.490876  153142 round_trippers.go:580]     Audit-Id: 40475a74-d1ed-4874-a6a5-d399a29cfab9
	I0321 22:04:55.490886  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.490921  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.490935  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.491068  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:55.984570  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:55.984598  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.984611  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.984621  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.986803  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:55.986831  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.986843  153142 round_trippers.go:580]     Audit-Id: 100be0d4-0e62-437e-a7bc-42c3a1f9f34b
	I0321 22:04:55.986853  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.986862  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.986871  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.986881  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.986893  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.987019  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:55.987573  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:55.987591  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.987603  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.987613  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.989411  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:55.989430  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.989441  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.989450  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.989458  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.989468  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.989486  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.989500  153142 round_trippers.go:580]     Audit-Id: 6e27a4ea-aed8-4695-9bce-a7c0bbf0c912
	I0321 22:04:55.989616  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:56.485270  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:56.485291  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.485299  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.485306  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.487092  153142 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0321 22:04:56.487116  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.487127  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.487137  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.487146  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.487153  153142 round_trippers.go:580]     Content-Length: 216
	I0321 22:04:56.487160  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.487165  153142 round_trippers.go:580]     Audit-Id: 2013cc6d-c930-4fad-9338-458e2389f31b
	I0321 22:04:56.487174  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.487195  153142 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-69rb6\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-69rb6","kind":"pods"},"code":404}
	I0321 22:04:56.487358  153142 pod_ready.go:97] error getting pod "coredns-787d4945fb-69rb6" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-69rb6" not found
	I0321 22:04:56.487377  153142 pod_ready.go:81] duration metric: took 15.010875895s waiting for pod "coredns-787d4945fb-69rb6" in "kube-system" namespace to be "Ready" ...
	E0321 22:04:56.487385  153142 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-69rb6" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-69rb6" not found
	I0321 22:04:56.487394  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:56.487434  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:56.487441  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.487449  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.487455  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.489310  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.489333  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.489344  153142 round_trippers.go:580]     Audit-Id: 038a7ccb-6a2c-48b1-8c92-baa1b587466c
	I0321 22:04:56.489353  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.489363  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.489375  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.489382  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.489390  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.489542  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"415","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0321 22:04:56.489944  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:56.489955  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.489962  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.489968  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.491489  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.491505  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.491513  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.491520  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.491529  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.491541  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.491556  153142 round_trippers.go:580]     Audit-Id: eb821716-8c1c-4419-9765-e6b994cc69ba
	I0321 22:04:56.491565  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.491665  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:56.992663  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:56.992685  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.992696  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.992705  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.994798  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:56.994818  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.994825  153142 round_trippers.go:580]     Audit-Id: 1de2fca9-5bd7-4829-9c38-cf9c57d5e3ba
	I0321 22:04:56.994831  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.994837  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.994842  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.994848  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.994853  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.994939  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"415","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0321 22:04:56.995364  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:56.995377  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.995384  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.995390  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.997003  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.997030  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.997040  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.997049  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.997062  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.997071  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.997083  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.997093  153142 round_trippers.go:580]     Audit-Id: a296f251-b979-45b9-baee-2bdfa9c2e650
	I0321 22:04:56.997202  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.492730  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:57.492752  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.492765  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.492775  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.494987  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.495006  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.495013  153142 round_trippers.go:580]     Audit-Id: 8b043704-ab42-457f-9664-f91ff2eccbf9
	I0321 22:04:57.495019  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.495025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.495034  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.495043  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.495056  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.495147  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0321 22:04:57.495595  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.495609  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.495616  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.495624  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.497423  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.497442  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.497450  153142 round_trippers.go:580]     Audit-Id: 578befb9-85b7-47dd-968b-c9e1c5120c07
	I0321 22:04:57.497459  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.497468  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.497479  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.497492  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.497506  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.497613  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.497894  153142 pod_ready.go:92] pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.497916  153142 pod_ready.go:81] duration metric: took 1.010513718s waiting for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.497930  153142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.497982  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-860915
	I0321 22:04:57.497991  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.498002  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.498033  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.499716  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.499735  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.499744  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.499753  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.499762  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.499775  153142 round_trippers.go:580]     Audit-Id: f082e3ef-31c2-49ba-b3e3-82bd3a99c8a1
	I0321 22:04:57.499788  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.499804  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.499898  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-860915","namespace":"kube-system","uid":"8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b","resourceVersion":"277","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.mirror":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.seen":"2023-03-21T22:04:28.326473783Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0321 22:04:57.500251  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.500263  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.500270  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.500276  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.501635  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.501655  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.501666  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.501675  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.501681  153142 round_trippers.go:580]     Audit-Id: efdf7d07-e65e-44d9-9fc0-746dffffd179
	I0321 22:04:57.501687  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.501692  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.501698  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.501829  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.502151  153142 pod_ready.go:92] pod "etcd-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.502164  153142 pod_ready.go:81] duration metric: took 4.223502ms waiting for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.502180  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.502232  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-860915
	I0321 22:04:57.502244  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.502257  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.502270  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.503722  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.503738  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.503748  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.503758  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.503814  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.503829  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.503841  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.503851  153142 round_trippers.go:580]     Audit-Id: e2284b40-f351-41f2-9dbc-e557243becf1
	I0321 22:04:57.503972  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-860915","namespace":"kube-system","uid":"1f990298-d202-4148-ac4a-b5f713f9fd83","resourceVersion":"274","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.mirror":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.seen":"2023-03-21T22:04:28.326475235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0321 22:04:57.504334  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.504348  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.504358  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.504378  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.505599  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.505615  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.505624  153142 round_trippers.go:580]     Audit-Id: e91ec303-f5ce-4b8e-97fc-50c8112c5014
	I0321 22:04:57.505633  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.505641  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.505650  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.505662  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.505673  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.505733  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.506052  153142 pod_ready.go:92] pod "kube-apiserver-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.506064  153142 pod_ready.go:81] duration metric: took 3.875533ms waiting for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.506074  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.506120  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-860915
	I0321 22:04:57.506130  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.506141  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.506153  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.507480  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.507494  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.507500  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.507506  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.507511  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.507517  153142 round_trippers.go:580]     Audit-Id: fa32c58d-12fb-41c4-9acb-6e0923b5488d
	I0321 22:04:57.507523  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.507532  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.507618  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-860915","namespace":"kube-system","uid":"6c6a8cd8-e27e-40e9-910f-c3d9b56c6882","resourceVersion":"391","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.mirror":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.seen":"2023-03-21T22:04:28.326453711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0321 22:04:57.507960  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.507973  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.507980  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.507987  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.509186  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.509200  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.509207  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.509213  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.509222  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.509233  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.509244  153142 round_trippers.go:580]     Audit-Id: b2fceeee-3672-44ad-9485-59de1d5be06c
	I0321 22:04:57.509250  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.509329  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.509555  153142 pod_ready.go:92] pod "kube-controller-manager-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.509564  153142 pod_ready.go:81] duration metric: took 3.484348ms waiting for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.509573  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.509603  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:04:57.509610  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.509618  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.509624  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.510949  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.510968  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.510978  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.510987  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.510995  153142 round_trippers.go:580]     Audit-Id: 10044fd8-5914-4468-b352-0eff97ad19b9
	I0321 22:04:57.511011  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.511020  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.511032  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.511129  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-97hnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"a92d55d8-3ec3-4e8e-b31c-f24fcb440600","resourceVersion":"382","creationTimestamp":"2023-03-21T22:04:40Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0321 22:04:57.511477  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.511489  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.511496  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.511504  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.512720  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.512734  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.512741  153142 round_trippers.go:580]     Audit-Id: 62739c95-836d-4760-b04b-48f43d2bcd47
	I0321 22:04:57.512746  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.512752  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.512757  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.512762  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.512770  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.512850  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.513078  153142 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.513088  153142 pod_ready.go:81] duration metric: took 3.510613ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.513096  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.693467  153142 request.go:622] Waited for 180.321576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:04:57.693532  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:04:57.693539  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.693551  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.693565  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.695637  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.695657  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.695665  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.695670  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.695676  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.695682  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.695687  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.695694  153142 round_trippers.go:580]     Audit-Id: a1806582-30d9-4108-8e95-686618358d66
	I0321 22:04:57.695877  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-860915","namespace":"kube-system","uid":"1a170ba9-55b2-4275-be35-718bde52ddc2","resourceVersion":"272","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.mirror":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.seen":"2023-03-21T22:04:28.326472146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0321 22:04:57.893603  153142 request.go:622] Waited for 197.372537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.893653  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.893657  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.893665  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.893671  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.895717  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.895734  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.895740  153142 round_trippers.go:580]     Audit-Id: 4aa6391f-9e9f-4a34-b41a-59611fddc63a
	I0321 22:04:57.895746  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.895751  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.895756  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.895762  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.895767  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.895875  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.896145  153142 pod_ready.go:92] pod "kube-scheduler-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.896156  153142 pod_ready.go:81] duration metric: took 383.055576ms waiting for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.896169  153142 pod_ready.go:38] duration metric: took 16.476205427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:04:57.896193  153142 api_server.go:51] waiting for apiserver process to appear ...
	I0321 22:04:57.896234  153142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:04:57.905449  153142 command_runner.go:130] > 2095
	I0321 22:04:57.906179  153142 api_server.go:71] duration metric: took 16.835670344s to wait for apiserver process to appear ...
	I0321 22:04:57.906198  153142 api_server.go:87] waiting for apiserver healthz status ...
	I0321 22:04:57.906208  153142 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0321 22:04:57.910085  153142 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0321 22:04:57.910130  153142 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0321 22:04:57.910140  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.910149  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.910156  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.910809  153142 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0321 22:04:57.910823  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.910830  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.910835  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.910842  153142 round_trippers.go:580]     Content-Length: 263
	I0321 22:04:57.910847  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.910853  153142 round_trippers.go:580]     Audit-Id: 5240da1a-0fad-4aab-a6bb-b35fc4eed4dd
	I0321 22:04:57.910858  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.910863  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.910879  153142 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0321 22:04:57.910939  153142 api_server.go:140] control plane version: v1.26.2
	I0321 22:04:57.910951  153142 api_server.go:130] duration metric: took 4.748684ms to wait for apiserver health ...
	I0321 22:04:57.910957  153142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0321 22:04:58.093331  153142 request.go:622] Waited for 182.317735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.093377  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.093382  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.093390  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.093397  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.096514  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:04:58.096546  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.096558  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.096568  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.096577  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.096589  153142 round_trippers.go:580]     Audit-Id: f334a6ef-589e-49c3-84c7-5c448cd0a57a
	I0321 22:04:58.096601  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.096614  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.097037  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0321 22:04:58.098717  153142 system_pods.go:59] 8 kube-system pods found
	I0321 22:04:58.098738  153142 system_pods.go:61] "coredns-787d4945fb-wx8p9" [8b510dd8-761f-469a-8ccc-d08beb282e56] Running
	I0321 22:04:58.098745  153142 system_pods.go:61] "etcd-multinode-860915" [8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b] Running
	I0321 22:04:58.098750  153142 system_pods.go:61] "kindnet-wnjrv" [2a3b424c-5776-46cc-8cce-675ab8d20f34] Running
	I0321 22:04:58.098757  153142 system_pods.go:61] "kube-apiserver-multinode-860915" [1f990298-d202-4148-ac4a-b5f713f9fd83] Running
	I0321 22:04:58.098762  153142 system_pods.go:61] "kube-controller-manager-multinode-860915" [6c6a8cd8-e27e-40e9-910f-c3d9b56c6882] Running
	I0321 22:04:58.098768  153142 system_pods.go:61] "kube-proxy-97hnd" [a92d55d8-3ec3-4e8e-b31c-f24fcb440600] Running
	I0321 22:04:58.098772  153142 system_pods.go:61] "kube-scheduler-multinode-860915" [1a170ba9-55b2-4275-be35-718bde52ddc2] Running
	I0321 22:04:58.098779  153142 system_pods.go:61] "storage-provisioner" [07f8352f-22bf-4948-aff5-af3a33cfb84e] Running
	I0321 22:04:58.098784  153142 system_pods.go:74] duration metric: took 187.822987ms to wait for pod list to return data ...
	I0321 22:04:58.098794  153142 default_sa.go:34] waiting for default service account to be created ...
	I0321 22:04:58.293201  153142 request.go:622] Waited for 194.331163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0321 22:04:58.293247  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0321 22:04:58.293260  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.293267  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.293277  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.295398  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:58.295416  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.295423  153142 round_trippers.go:580]     Content-Length: 261
	I0321 22:04:58.295429  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.295435  153142 round_trippers.go:580]     Audit-Id: aa9d2012-2c1b-4f10-9c7a-1a3ffb9db0f4
	I0321 22:04:58.295441  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.295446  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.295452  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.295461  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.295479  153142 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b8e9d182-d215-4cde-9add-038eb5f0ad0b","resourceVersion":"301","creationTimestamp":"2023-03-21T22:04:40Z"}}]}
	I0321 22:04:58.295650  153142 default_sa.go:45] found service account: "default"
	I0321 22:04:58.295663  153142 default_sa.go:55] duration metric: took 196.856404ms for default service account to be created ...
	I0321 22:04:58.295670  153142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0321 22:04:58.493081  153142 request.go:622] Waited for 197.353191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.493140  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.493145  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.493153  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.493161  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.496196  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:04:58.496223  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.496234  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.496244  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.496251  153142 round_trippers.go:580]     Audit-Id: b42c8d62-89ce-463d-b67b-970e7798183d
	I0321 22:04:58.496265  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.496278  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.496287  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.496663  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0321 22:04:58.498371  153142 system_pods.go:86] 8 kube-system pods found
	I0321 22:04:58.498392  153142 system_pods.go:89] "coredns-787d4945fb-wx8p9" [8b510dd8-761f-469a-8ccc-d08beb282e56] Running
	I0321 22:04:58.498401  153142 system_pods.go:89] "etcd-multinode-860915" [8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b] Running
	I0321 22:04:58.498411  153142 system_pods.go:89] "kindnet-wnjrv" [2a3b424c-5776-46cc-8cce-675ab8d20f34] Running
	I0321 22:04:58.498421  153142 system_pods.go:89] "kube-apiserver-multinode-860915" [1f990298-d202-4148-ac4a-b5f713f9fd83] Running
	I0321 22:04:58.498436  153142 system_pods.go:89] "kube-controller-manager-multinode-860915" [6c6a8cd8-e27e-40e9-910f-c3d9b56c6882] Running
	I0321 22:04:58.498443  153142 system_pods.go:89] "kube-proxy-97hnd" [a92d55d8-3ec3-4e8e-b31c-f24fcb440600] Running
	I0321 22:04:58.498453  153142 system_pods.go:89] "kube-scheduler-multinode-860915" [1a170ba9-55b2-4275-be35-718bde52ddc2] Running
	I0321 22:04:58.498462  153142 system_pods.go:89] "storage-provisioner" [07f8352f-22bf-4948-aff5-af3a33cfb84e] Running
	I0321 22:04:58.498475  153142 system_pods.go:126] duration metric: took 202.799516ms to wait for k8s-apps to be running ...
	I0321 22:04:58.498485  153142 system_svc.go:44] waiting for kubelet service to be running ....
	I0321 22:04:58.498532  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:04:58.507964  153142 system_svc.go:56] duration metric: took 9.473324ms WaitForService to wait for kubelet.
	I0321 22:04:58.507990  153142 kubeadm.go:578] duration metric: took 17.437477807s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0321 22:04:58.508022  153142 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:04:58.693460  153142 request.go:622] Waited for 185.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0321 22:04:58.693506  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0321 22:04:58.693511  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.693518  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.693525  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.695752  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:58.695771  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.695778  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.695784  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.695789  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.695796  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.695806  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.695814  153142 round_trippers.go:580]     Audit-Id: 49dbfe2d-7e76-40b6-943c-5da452a35ee6
	I0321 22:04:58.695977  153142 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5052 chars]
	I0321 22:04:58.696326  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:04:58.696353  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:04:58.696367  153142 node_conditions.go:105] duration metric: took 188.339387ms to run NodePressure ...
	I0321 22:04:58.696377  153142 start.go:228] waiting for startup goroutines ...
	I0321 22:04:58.696385  153142 start.go:233] waiting for cluster config update ...
	I0321 22:04:58.696397  153142 start.go:242] writing updated cluster config ...
	I0321 22:04:58.699416  153142 out.go:177] 
	I0321 22:04:58.701316  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:58.701389  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:58.703369  153142 out.go:177] * Starting worker node multinode-860915-m02 in cluster multinode-860915
	I0321 22:04:58.704754  153142 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 22:04:58.706275  153142 out.go:177] * Pulling base image ...
	I0321 22:04:58.708101  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:58.708125  153142 cache.go:57] Caching tarball of preloaded images
	I0321 22:04:58.708125  153142 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 22:04:58.708217  153142 preload.go:174] Found /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0321 22:04:58.708234  153142 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0321 22:04:58.708340  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:58.772762  153142 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0321 22:04:58.772785  153142 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0321 22:04:58.772803  153142 cache.go:193] Successfully downloaded all kic artifacts
	I0321 22:04:58.772829  153142 start.go:364] acquiring machines lock for multinode-860915-m02: {Name:mk031987672620a4f648b7cea3a75ff5f4c6353f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:04:58.772925  153142 start.go:368] acquired machines lock for "multinode-860915-m02" in 77.142µs
	I0321 22:04:58.772950  153142 start.go:93] Provisioning new machine with config: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:04:58.773036  153142 start.go:125] createHost starting for "m02" (driver="docker")
	I0321 22:04:58.775364  153142 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0321 22:04:58.775463  153142 start.go:159] libmachine.API.Create for "multinode-860915" (driver="docker")
	I0321 22:04:58.775486  153142 client.go:168] LocalClient.Create starting
	I0321 22:04:58.775556  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem
	I0321 22:04:58.775587  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:58.775602  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:58.775653  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem
	I0321 22:04:58.775671  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:58.775682  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:58.775867  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:58.840933  153142 network_create.go:76] Found existing network {name:multinode-860915 subnet:0xc001946000 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0321 22:04:58.840972  153142 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-860915-m02" container
	I0321 22:04:58.841041  153142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0321 22:04:58.908626  153142 cli_runner.go:164] Run: docker volume create multinode-860915-m02 --label name.minikube.sigs.k8s.io=multinode-860915-m02 --label created_by.minikube.sigs.k8s.io=true
	I0321 22:04:58.975214  153142 oci.go:103] Successfully created a docker volume multinode-860915-m02
	I0321 22:04:58.975288  153142 cli_runner.go:164] Run: docker run --rm --name multinode-860915-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915-m02 --entrypoint /usr/bin/test -v multinode-860915-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0321 22:04:59.586775  153142 oci.go:107] Successfully prepared a docker volume multinode-860915-m02
	I0321 22:04:59.586814  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:59.586837  153142 kic.go:190] Starting extracting preloaded images to volume ...
	I0321 22:04:59.586897  153142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0321 22:05:04.543271  153142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956327495s)
	I0321 22:05:04.543300  153142 kic.go:199] duration metric: took 4.956459 seconds to extract preloaded images to volume
	W0321 22:05:04.543411  153142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0321 22:05:04.543487  153142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0321 22:05:04.661899  153142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-860915-m02 --name multinode-860915-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-860915-m02 --network multinode-860915 --ip 192.168.58.3 --volume multinode-860915-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0321 22:05:05.070332  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Running}}
	I0321 22:05:05.136415  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.204463  153142 cli_runner.go:164] Run: docker exec multinode-860915-m02 stat /var/lib/dpkg/alternatives/iptables
	I0321 22:05:05.320439  153142 oci.go:144] the created container "multinode-860915-m02" has a running status.
	I0321 22:05:05.320471  153142 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa...
	I0321 22:05:05.489497  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0321 22:05:05.489545  153142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0321 22:05:05.606050  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.674950  153142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0321 22:05:05.674968  153142 kic_runner.go:114] Args: [docker exec --privileged multinode-860915-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0321 22:05:05.793797  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.856138  153142 machine.go:88] provisioning docker machine ...
	I0321 22:05:05.856175  153142 ubuntu.go:169] provisioning hostname "multinode-860915-m02"
	I0321 22:05:05.856226  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:05.919039  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:05.919458  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:05.919475  153142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860915-m02 && echo "multinode-860915-m02" | sudo tee /etc/hostname
	I0321 22:05:06.041515  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860915-m02
	
	I0321 22:05:06.041592  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.104393  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.104852  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.104882  153142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860915-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860915-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860915-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0321 22:05:06.217420  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0321 22:05:06.217449  153142 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16124-3841/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-3841/.minikube}
	I0321 22:05:06.217468  153142 ubuntu.go:177] setting up certificates
	I0321 22:05:06.217477  153142 provision.go:83] configureAuth start
	I0321 22:05:06.217521  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:06.279578  153142 provision.go:138] copyHostCerts
	I0321 22:05:06.279624  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:05:06.279666  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem, removing ...
	I0321 22:05:06.279677  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:05:06.279756  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem (1082 bytes)
	I0321 22:05:06.279826  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:05:06.279846  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem, removing ...
	I0321 22:05:06.279850  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:05:06.279880  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem (1123 bytes)
	I0321 22:05:06.279942  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:05:06.279965  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem, removing ...
	I0321 22:05:06.279970  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:05:06.279999  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem (1675 bytes)
	I0321 22:05:06.280940  153142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem org=jenkins.multinode-860915-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-860915-m02]
	I0321 22:05:06.424772  153142 provision.go:172] copyRemoteCerts
	I0321 22:05:06.424824  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0321 22:05:06.424855  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.486712  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:06.568628  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0321 22:05:06.568684  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0321 22:05:06.585328  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0321 22:05:06.585397  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0321 22:05:06.602009  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0321 22:05:06.602086  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0321 22:05:06.618149  153142 provision.go:86] duration metric: configureAuth took 400.663052ms
	I0321 22:05:06.618171  153142 ubuntu.go:193] setting minikube options for container-runtime
	I0321 22:05:06.618333  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:05:06.618391  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.681414  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.681833  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.681851  153142 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0321 22:05:06.793802  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0321 22:05:06.793841  153142 ubuntu.go:71] root file system type: overlay
	I0321 22:05:06.793960  153142 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0321 22:05:06.794007  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.857202  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.857616  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.857675  153142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0321 22:05:06.977731  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0321 22:05:06.977794  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.038293  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:07.038752  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:07.038777  153142 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0321 22:05:07.653684  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-21 22:05:06.973195044 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0321 22:05:07.653720  153142 machine.go:91] provisioned docker machine in 1.79755383s
	I0321 22:05:07.653734  153142 client.go:171] LocalClient.Create took 8.878240806s
	I0321 22:05:07.653758  153142 start.go:167] duration metric: libmachine.API.Create for "multinode-860915" took 8.878293644s
	I0321 22:05:07.653771  153142 start.go:300] post-start starting for "multinode-860915-m02" (driver="docker")
	I0321 22:05:07.653785  153142 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0321 22:05:07.653849  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0321 22:05:07.653910  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.719963  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:07.805010  153142 ssh_runner.go:195] Run: cat /etc/os-release
	I0321 22:05:07.807384  153142 command_runner.go:130] > NAME="Ubuntu"
	I0321 22:05:07.807404  153142 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0321 22:05:07.807411  153142 command_runner.go:130] > ID=ubuntu
	I0321 22:05:07.807419  153142 command_runner.go:130] > ID_LIKE=debian
	I0321 22:05:07.807427  153142 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0321 22:05:07.807432  153142 command_runner.go:130] > VERSION_ID="20.04"
	I0321 22:05:07.807437  153142 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0321 22:05:07.807445  153142 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0321 22:05:07.807450  153142 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0321 22:05:07.807460  153142 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0321 22:05:07.807465  153142 command_runner.go:130] > VERSION_CODENAME=focal
	I0321 22:05:07.807470  153142 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0321 22:05:07.807541  153142 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0321 22:05:07.807557  153142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0321 22:05:07.807566  153142 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0321 22:05:07.807576  153142 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0321 22:05:07.807590  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/addons for local assets ...
	I0321 22:05:07.807643  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/files for local assets ...
	I0321 22:05:07.807731  153142 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> 105322.pem in /etc/ssl/certs
	I0321 22:05:07.807742  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /etc/ssl/certs/105322.pem
	I0321 22:05:07.807845  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0321 22:05:07.813962  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:05:07.829637  153142 start.go:303] post-start completed in 175.84866ms
	I0321 22:05:07.829954  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:07.893904  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:05:07.894214  153142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:05:07.894268  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.955636  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.038242  153142 command_runner.go:130] > 17%!
	(MISSING)I0321 22:05:08.038313  153142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0321 22:05:08.041694  153142 command_runner.go:130] > 242G
	I0321 22:05:08.041797  153142 start.go:128] duration metric: createHost completed in 9.268749395s
	I0321 22:05:08.041820  153142 start.go:83] releasing machines lock for "multinode-860915-m02", held for 9.268883029s
	I0321 22:05:08.041883  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:08.106692  153142 out.go:177] * Found network options:
	I0321 22:05:08.108261  153142 out.go:177]   - NO_PROXY=192.168.58.2
	W0321 22:05:08.109568  153142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0321 22:05:08.109612  153142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0321 22:05:08.109685  153142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0321 22:05:08.109729  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:08.109731  153142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0321 22:05:08.109775  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:08.179407  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.183357  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.291318  153142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0321 22:05:08.291384  153142 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0321 22:05:08.291402  153142 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0321 22:05:08.291411  153142 command_runner.go:130] > Device: c5h/197d	Inode: 1322525     Links: 1
	I0321 22:05:08.291420  153142 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:05:08.291431  153142 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:05:08.291442  153142 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:05:08.291452  153142 command_runner.go:130] > Change: 2023-03-21 21:49:53.137271995 +0000
	I0321 22:05:08.291463  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:08.291523  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0321 22:05:08.311114  153142 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0321 22:05:08.311171  153142 ssh_runner.go:195] Run: which cri-dockerd
	I0321 22:05:08.313632  153142 command_runner.go:130] > /usr/bin/cri-dockerd
	I0321 22:05:08.313840  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0321 22:05:08.319968  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0321 22:05:08.331810  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0321 22:05:08.345433  153142 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0321 22:05:08.345474  153142 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0321 22:05:08.345487  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:05:08.345518  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:05:08.345606  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:05:08.356340  153142 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0321 22:05:08.357014  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0321 22:05:08.364057  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0321 22:05:08.370943  153142 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0321 22:05:08.370991  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0321 22:05:08.377859  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:05:08.384998  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0321 22:05:08.392019  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:05:08.398930  153142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0321 22:05:08.405637  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0321 22:05:08.412550  153142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0321 22:05:08.418304  153142 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0321 22:05:08.418353  153142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0321 22:05:08.423937  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:08.498504  153142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:05:08.576442  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:05:08.576497  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:05:08.576544  153142 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0321 22:05:08.585171  153142 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0321 22:05:08.585189  153142 command_runner.go:130] > [Unit]
	I0321 22:05:08.585197  153142 command_runner.go:130] > Description=Docker Application Container Engine
	I0321 22:05:08.585206  153142 command_runner.go:130] > Documentation=https://docs.docker.com
	I0321 22:05:08.585212  153142 command_runner.go:130] > BindsTo=containerd.service
	I0321 22:05:08.585221  153142 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0321 22:05:08.585228  153142 command_runner.go:130] > Wants=network-online.target
	I0321 22:05:08.585238  153142 command_runner.go:130] > Requires=docker.socket
	I0321 22:05:08.585243  153142 command_runner.go:130] > StartLimitBurst=3
	I0321 22:05:08.585247  153142 command_runner.go:130] > StartLimitIntervalSec=60
	I0321 22:05:08.585251  153142 command_runner.go:130] > [Service]
	I0321 22:05:08.585255  153142 command_runner.go:130] > Type=notify
	I0321 22:05:08.585258  153142 command_runner.go:130] > Restart=on-failure
	I0321 22:05:08.585262  153142 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0321 22:05:08.585270  153142 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0321 22:05:08.585288  153142 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0321 22:05:08.585301  153142 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0321 22:05:08.585317  153142 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0321 22:05:08.585329  153142 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0321 22:05:08.585338  153142 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0321 22:05:08.585349  153142 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0321 22:05:08.585366  153142 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0321 22:05:08.585381  153142 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0321 22:05:08.585391  153142 command_runner.go:130] > ExecStart=
	I0321 22:05:08.585412  153142 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0321 22:05:08.585428  153142 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0321 22:05:08.585434  153142 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0321 22:05:08.585440  153142 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0321 22:05:08.585445  153142 command_runner.go:130] > LimitNOFILE=infinity
	I0321 22:05:08.585449  153142 command_runner.go:130] > LimitNPROC=infinity
	I0321 22:05:08.585453  153142 command_runner.go:130] > LimitCORE=infinity
	I0321 22:05:08.585462  153142 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0321 22:05:08.585467  153142 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0321 22:05:08.585475  153142 command_runner.go:130] > TasksMax=infinity
	I0321 22:05:08.585479  153142 command_runner.go:130] > TimeoutStartSec=0
	I0321 22:05:08.585487  153142 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0321 22:05:08.585491  153142 command_runner.go:130] > Delegate=yes
	I0321 22:05:08.585501  153142 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0321 22:05:08.585505  153142 command_runner.go:130] > KillMode=process
	I0321 22:05:08.585509  153142 command_runner.go:130] > [Install]
	I0321 22:05:08.585513  153142 command_runner.go:130] > WantedBy=multi-user.target
	I0321 22:05:08.585992  153142 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0321 22:05:08.586080  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0321 22:05:08.595607  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:05:08.608091  153142 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
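(The `tee` output above shows the entire file that was written. `crictl` would otherwise probe a list of default runtime sockets; pinning the endpoint makes it talk to cri-dockerd directly. The resulting config, reproduced for reference:)

```yaml
# /etc/crictl.yaml — written by the printf | tee pipeline logged above
runtime-endpoint: unix:///var/run/cri-dockerd.sock
```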
	I0321 22:05:08.609382  153142 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0321 22:05:08.715913  153142 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0321 22:05:08.810986  153142 docker.go:531] configuring docker to use "cgroupfs" as cgroup driver...
	I0321 22:05:08.811029  153142 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
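(The log records only the size of the daemon.json it copies, not its contents. A plausible sketch of a 144-byte drop-in that selects the "cgroupfs" driver, using the standard dockerd `exec-opts` key — the exact keys minikube writes are an assumption here:)

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```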
	I0321 22:05:08.824097  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:08.907829  153142 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0321 22:05:09.104797  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:05:09.177286  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0321 22:05:09.177361  153142 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0321 22:05:09.246711  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:05:09.328389  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:09.404764  153142 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0321 22:05:09.415005  153142 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0321 22:05:09.415056  153142 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0321 22:05:09.417745  153142 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0321 22:05:09.417760  153142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0321 22:05:09.417766  153142 command_runner.go:130] > Device: d3h/211d	Inode: 206         Links: 1
	I0321 22:05:09.417773  153142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0321 22:05:09.417778  153142 command_runner.go:130] > Access: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417783  153142 command_runner.go:130] > Modify: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417789  153142 command_runner.go:130] > Change: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417793  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:09.417871  153142 start.go:553] Will wait 60s for crictl version
	I0321 22:05:09.417906  153142 ssh_runner.go:195] Run: which crictl
	I0321 22:05:09.420246  153142 command_runner.go:130] > /usr/bin/crictl
	I0321 22:05:09.420368  153142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0321 22:05:09.493861  153142 command_runner.go:130] > Version:  0.1.0
	I0321 22:05:09.493885  153142 command_runner.go:130] > RuntimeName:  docker
	I0321 22:05:09.493889  153142 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0321 22:05:09.493894  153142 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0321 22:05:09.493909  153142 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0321 22:05:09.493956  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:05:09.515851  153142 command_runner.go:130] > 23.0.1
	I0321 22:05:09.515914  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:05:09.534342  153142 command_runner.go:130] > 23.0.1
	I0321 22:05:09.537217  153142 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0321 22:05:09.538644  153142 out.go:177]   - env NO_PROXY=192.168.58.2
	I0321 22:05:09.540044  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:05:09.603538  153142 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0321 22:05:09.606651  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
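(The one-liner above is minikube's idempotent hosts-file update: drop any existing line for the name, append the fresh mapping, then copy the temp file over `/etc/hosts`. A minimal Python paraphrase of that pipeline — the function name is ours, not minikube's:)

```python
def upsert_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Mirror the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` pattern:
    remove any line ending in "\t<name>", then append "<ip>\t<name>"."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```

Running it twice with the same name is a no-op apart from the IP, which is why minikube can re-run it on every start.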
	I0321 22:05:09.615314  153142 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915 for IP: 192.168.58.3
	I0321 22:05:09.615337  153142 certs.go:186] acquiring lock for shared ca certs: {Name:mke51456f2089c678c8a8085b7dd3883448bd6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:05:09.615461  153142 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key
	I0321 22:05:09.615509  153142 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key
	I0321 22:05:09.615526  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0321 22:05:09.615538  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0321 22:05:09.615550  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0321 22:05:09.615561  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0321 22:05:09.615619  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem (1338 bytes)
	W0321 22:05:09.615654  153142 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532_empty.pem, impossibly tiny 0 bytes
	I0321 22:05:09.615664  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem (1675 bytes)
	I0321 22:05:09.615697  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem (1082 bytes)
	I0321 22:05:09.615732  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem (1123 bytes)
	I0321 22:05:09.615761  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem (1675 bytes)
	I0321 22:05:09.615818  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:05:09.615850  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.615869  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem -> /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.615888  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.616290  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0321 22:05:09.632246  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0321 22:05:09.648150  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0321 22:05:09.663428  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0321 22:05:09.678610  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0321 22:05:09.694413  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem --> /usr/share/ca-certificates/10532.pem (1338 bytes)
	I0321 22:05:09.709875  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /usr/share/ca-certificates/105322.pem (1708 bytes)
	I0321 22:05:09.725404  153142 ssh_runner.go:195] Run: openssl version
	I0321 22:05:09.729460  153142 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0321 22:05:09.729651  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0321 22:05:09.736146  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738843  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738899  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738943  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.743066  153142 command_runner.go:130] > b5213941
	I0321 22:05:09.743253  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0321 22:05:09.749742  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10532.pem && ln -fs /usr/share/ca-certificates/10532.pem /etc/ssl/certs/10532.pem"
	I0321 22:05:09.756329  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.758933  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.759014  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.759062  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.763087  153142 command_runner.go:130] > 51391683
	I0321 22:05:09.763252  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10532.pem /etc/ssl/certs/51391683.0"
	I0321 22:05:09.769587  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105322.pem && ln -fs /usr/share/ca-certificates/105322.pem /etc/ssl/certs/105322.pem"
	I0321 22:05:09.776224  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.778861  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.778966  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.779006  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.783165  153142 command_runner.go:130] > 3ec20f2e
	I0321 22:05:09.783322  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105322.pem /etc/ssl/certs/3ec20f2e.0"
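(The three `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's CA lookup convention: a trust store directory holds symlinks named `<subject-hash>.<n>` pointing at the PEM files, so in this run `b5213941.0` → minikubeCA.pem, `51391683.0` → 10532.pem, `3ec20f2e.0` → 105322.pem. A trivial sketch of the naming rule, with a helper name of our own:)

```python
def hash_link_name(subject_hash: str, n: int = 0) -> str:
    """Symlink name OpenSSL expects in /etc/ssl/certs for a CA cert
    whose `openssl x509 -hash -noout` output is `subject_hash`.
    The suffix disambiguates distinct certs with the same hash."""
    return f"{subject_hash}.{n}"
```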
	I0321 22:05:09.789821  153142 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0321 22:05:09.809969  153142 command_runner.go:130] > cgroupfs
	I0321 22:05:09.810896  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:05:09.810910  153142 cni.go:136] 2 nodes found, recommending kindnet
	I0321 22:05:09.810918  153142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0321 22:05:09.810935  153142 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860915 NodeName:multinode-860915-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0321 22:05:09.811040  153142 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-860915-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0321 22:05:09.811092  153142 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-860915-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0321 22:05:09.811129  153142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0321 22:05:09.817360  153142 command_runner.go:130] > kubeadm
	I0321 22:05:09.817373  153142 command_runner.go:130] > kubectl
	I0321 22:05:09.817378  153142 command_runner.go:130] > kubelet
	I0321 22:05:09.817873  153142 binaries.go:44] Found k8s binaries, skipping transfer
	I0321 22:05:09.817915  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0321 22:05:09.823861  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0321 22:05:09.835199  153142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0321 22:05:09.847031  153142 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0321 22:05:09.849827  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:05:09.858525  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:05:09.858744  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:05:09.858775  153142 start.go:301] JoinCluster: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:05:09.858873  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0321 22:05:09.858916  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:05:09.920238  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:05:10.051021  153142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:05:10.055139  153142 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:05:10.055179  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-860915-m02"
	I0321 22:05:10.089341  153142 command_runner.go:130] ! W0321 22:05:10.089004    1339 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:05:10.114065  153142 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:05:10.178325  153142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:05:11.811827  153142 command_runner.go:130] > [preflight] Running pre-flight checks
	I0321 22:05:11.811856  153142 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:05:11.811868  153142 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:05:11.811875  153142 command_runner.go:130] > OS: Linux
	I0321 22:05:11.811883  153142 command_runner.go:130] > CGROUPS_CPU: enabled
	I0321 22:05:11.811893  153142 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0321 22:05:11.811905  153142 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0321 22:05:11.811917  153142 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0321 22:05:11.811929  153142 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0321 22:05:11.811940  153142 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0321 22:05:11.811952  153142 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0321 22:05:11.811961  153142 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0321 22:05:11.811972  153142 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0321 22:05:11.811985  153142 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0321 22:05:11.812000  153142 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0321 22:05:11.812014  153142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:05:11.812029  153142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:05:11.812041  153142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0321 22:05:11.812060  153142 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0321 22:05:11.812071  153142 command_runner.go:130] > This node has joined the cluster:
	I0321 22:05:11.812081  153142 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0321 22:05:11.812094  153142 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0321 22:05:11.812108  153142 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0321 22:05:11.812135  153142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-860915-m02": (1.756941479s)
	I0321 22:05:11.812157  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0321 22:05:11.984273  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0321 22:05:11.984314  153142 start.go:303] JoinCluster complete in 2.125536119s
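(The join sequence above is two halves: `kubeadm token create --print-join-command --ttl=0` on the control plane emits the base join command, and minikube appends its own flags before running it on the worker. A sketch of how that final command line is assembled, flag names taken verbatim from the log — the helper itself is ours:)

```python
def kubeadm_join_cmd(endpoint: str, token: str, ca_hash: str,
                     cri_socket: str, node_name: str) -> str:
    """Compose the worker-join command as it appears in the log:
    base join line plus minikube's preflight/CRI/node-name flags."""
    return (f"kubeadm join {endpoint} --token {token} "
            f"--discovery-token-ca-cert-hash sha256:{ca_hash} "
            f"--ignore-preflight-errors=all "
            f"--cri-socket {cri_socket} "
            f"--node-name={node_name}")
```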
	I0321 22:05:11.984327  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:05:11.984334  153142 cni.go:136] 2 nodes found, recommending kindnet
	I0321 22:05:11.984376  153142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0321 22:05:11.987408  153142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0321 22:05:11.987431  153142 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0321 22:05:11.987442  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1320614     Links: 1
	I0321 22:05:11.987451  153142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:05:11.987460  153142 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:05:11.987468  153142 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:05:11.987482  153142 command_runner.go:130] > Change: 2023-03-21 21:49:52.361193928 +0000
	I0321 22:05:11.987490  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:11.987546  153142 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0321 22:05:11.987555  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0321 22:05:11.999294  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0321 22:05:12.148289  153142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0321 22:05:12.151758  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0321 22:05:12.153628  153142 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0321 22:05:12.164270  153142 command_runner.go:130] > daemonset.apps/kindnet configured
	I0321 22:05:12.168684  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:05:12.168909  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:05:12.169279  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:05:12.169296  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.169308  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.169322  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.172517  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:05:12.172547  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.172557  153142 round_trippers.go:580]     Audit-Id: 90714f64-4c46-48d0-b5ee-de7134882a14
	I0321 22:05:12.172567  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.172579  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.172593  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.172606  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.172619  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:05:12.172632  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.172660  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"429","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0321 22:05:12.172752  153142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-860915" context rescaled to 1 replicas
	I0321 22:05:12.172785  153142 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:05:12.175983  153142 out.go:177] * Verifying Kubernetes components...
	I0321 22:05:12.177254  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:05:12.186494  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:05:12.186734  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:05:12.187010  153142 node_ready.go:35] waiting up to 6m0s for node "multinode-860915-m02" to be "Ready" ...
	I0321 22:05:12.187070  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.187080  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.187092  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.187104  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.188533  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.188552  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.188569  153142 round_trippers.go:580]     Audit-Id: da7ae12b-2b46-49cd-863e-ac9aedd72d9a
	I0321 22:05:12.188591  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.188603  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.188613  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.188626  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.188639  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.188746  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:12.189119  153142 node_ready.go:49] node "multinode-860915-m02" has status "Ready":"True"
	I0321 22:05:12.189136  153142 node_ready.go:38] duration metric: took 2.111728ms waiting for node "multinode-860915-m02" to be "Ready" ...
	I0321 22:05:12.189146  153142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:05:12.189207  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:05:12.189217  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.189229  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.189244  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.191759  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:12.191780  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.191789  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.191798  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.191810  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.191821  153142 round_trippers.go:580]     Audit-Id: eb3e94b9-140a-4b86-91f3-c8fbeeb5d5c7
	I0321 22:05:12.191881  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.191899  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.192303  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0321 22:05:12.194279  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.194329  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:05:12.194336  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.194343  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.194350  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.195789  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.195803  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.195810  153142 round_trippers.go:580]     Audit-Id: 3f97851f-367b-4a6b-a6ac-8926fd817b66
	I0321 22:05:12.195817  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.195826  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.195838  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.195848  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.195857  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.195933  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0321 22:05:12.196329  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.196343  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.196350  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.196361  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.197734  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.197751  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.197760  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.197769  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.197782  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.197795  153142 round_trippers.go:580]     Audit-Id: ddb33783-1ccc-458f-a392-60a70e6c3cbf
	I0321 22:05:12.197808  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.197821  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.197910  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.198225  153142 pod_ready.go:92] pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.198238  153142 pod_ready.go:81] duration metric: took 3.94214ms waiting for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.198246  153142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.198283  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-860915
	I0321 22:05:12.198289  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.198296  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.198307  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.199697  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.199716  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.199726  153142 round_trippers.go:580]     Audit-Id: ef100106-9fe1-45be-9c19-bb86c99a3711
	I0321 22:05:12.199733  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.199749  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.199757  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.199766  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.199779  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.199873  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-860915","namespace":"kube-system","uid":"8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b","resourceVersion":"277","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.mirror":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.seen":"2023-03-21T22:04:28.326473783Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0321 22:05:12.200208  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.200220  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.200227  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.200233  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.201559  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.201583  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.201594  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.201605  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.201617  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.201627  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.201638  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.201647  153142 round_trippers.go:580]     Audit-Id: 06492e11-a65f-4f24-b982-b5d3f503425c
	I0321 22:05:12.201725  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.201969  153142 pod_ready.go:92] pod "etcd-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.201979  153142 pod_ready.go:81] duration metric: took 3.728684ms waiting for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.201991  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.202061  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-860915
	I0321 22:05:12.202070  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.202077  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.202083  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.203509  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.203527  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.203535  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.203544  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.203556  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.203578  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.203591  153142 round_trippers.go:580]     Audit-Id: 696fcdfd-00a3-404a-8ef2-0b3705606860
	I0321 22:05:12.203604  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.203749  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-860915","namespace":"kube-system","uid":"1f990298-d202-4148-ac4a-b5f713f9fd83","resourceVersion":"274","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.mirror":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.seen":"2023-03-21T22:04:28.326475235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0321 22:05:12.204122  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.204136  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.204146  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.204156  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.205359  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.205372  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.205381  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.205389  153142 round_trippers.go:580]     Audit-Id: fd155a7e-9abc-48bd-9027-a032ba413833
	I0321 22:05:12.205397  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.205406  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.205417  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.205425  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.205490  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.205741  153142 pod_ready.go:92] pod "kube-apiserver-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.205750  153142 pod_ready.go:81] duration metric: took 3.753653ms waiting for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.205758  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.205791  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-860915
	I0321 22:05:12.205798  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.205806  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.205814  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.207261  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.207281  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.207292  153142 round_trippers.go:580]     Audit-Id: 9f6b2c19-f7d5-4722-8af5-461b29116d43
	I0321 22:05:12.207299  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.207305  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.207312  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.207318  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.207324  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.207434  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-860915","namespace":"kube-system","uid":"6c6a8cd8-e27e-40e9-910f-c3d9b56c6882","resourceVersion":"391","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.mirror":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.seen":"2023-03-21T22:04:28.326453711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0321 22:05:12.207797  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.207811  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.207818  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.207825  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.209041  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.209060  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.209070  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.209079  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.209095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.209101  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.209108  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.209116  153142 round_trippers.go:580]     Audit-Id: ae5b075f-d639-49d4-9788-1d60acf006bb
	I0321 22:05:12.209176  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.209393  153142 pod_ready.go:92] pod "kube-controller-manager-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.209403  153142 pod_ready.go:81] duration metric: took 3.637923ms waiting for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.209410  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.387768  153142 request.go:622] Waited for 178.308218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:05:12.387812  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:05:12.387817  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.387825  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.387831  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.389760  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.389783  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.389791  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.389797  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.389803  153142 round_trippers.go:580]     Audit-Id: baa9643a-bdc0-41af-a67d-65bc0832fff2
	I0321 22:05:12.389808  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.389816  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.389826  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.389917  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-97hnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"a92d55d8-3ec3-4e8e-b31c-f24fcb440600","resourceVersion":"382","creationTimestamp":"2023-03-21T22:04:40Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0321 22:05:12.587628  153142 request.go:622] Waited for 197.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.587680  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.587686  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.587695  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.587711  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.589622  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.589644  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.589654  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.589666  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.589679  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.589687  153142 round_trippers.go:580]     Audit-Id: 05323963-ef6c-4207-b996-5f204b5fbc0f
	I0321 22:05:12.589696  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.589702  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.589776  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.590100  153142 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.590113  153142 pod_ready.go:81] duration metric: took 380.697553ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.590122  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-slz5b" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.787504  153142 request.go:622] Waited for 197.313998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:12.787553  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:12.787558  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.787565  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.787572  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.789325  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.789348  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.789356  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.789363  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.789368  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.789376  153142 round_trippers.go:580]     Audit-Id: 3b693f6b-d217-4696-a571-ca941983c01e
	I0321 22:05:12.789382  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.789390  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.789478  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"460","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0321 22:05:12.987106  153142 request.go:622] Waited for 197.27474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.987171  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.987181  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.987194  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.987205  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.988820  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.988838  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.988845  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.988852  153142 round_trippers.go:580]     Audit-Id: bf3c3f92-5f90-44ee-859f-8b2cc3a03905
	I0321 22:05:12.988857  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.988863  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.988868  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.988874  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.988994  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.490129  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:13.490208  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.490224  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.490233  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.492296  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:13.492321  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.492332  153142 round_trippers.go:580]     Audit-Id: ceb30675-c858-4576-b4f9-a135d4404669
	I0321 22:05:13.492342  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.492354  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.492370  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.492383  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.492393  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.492507  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"460","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0321 22:05:13.492898  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:13.492915  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.492925  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.492933  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.494505  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.494531  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.494541  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.494554  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.494564  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.494577  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.494586  153142 round_trippers.go:580]     Audit-Id: 3b21460b-7e60-4f00-8003-91a45cc0124a
	I0321 22:05:13.494595  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.494673  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.989560  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:13.989585  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.989598  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.989609  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.992160  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:13.992184  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.992194  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.992202  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.992212  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.992226  153142 round_trippers.go:580]     Audit-Id: 8b1330c9-2f1b-4f03-adae-76c8c5e1465f
	I0321 22:05:13.992235  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.992244  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.992368  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"483","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0321 22:05:13.992867  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:13.992881  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.992892  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.992902  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.994841  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.994888  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.994910  153142 round_trippers.go:580]     Audit-Id: 46de0112-aced-41d0-8d29-b82716c9e4ab
	I0321 22:05:13.994930  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.994952  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.994981  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.994995  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.995005  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.995108  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.995435  153142 pod_ready.go:92] pod "kube-proxy-slz5b" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:13.995459  153142 pod_ready.go:81] duration metric: took 1.405328991s waiting for pod "kube-proxy-slz5b" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:13.995471  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:13.995527  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:05:13.995536  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.995548  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.995567  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.997349  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.997368  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.997377  153142 round_trippers.go:580]     Audit-Id: eed98e3f-58ea-417b-93eb-f80c4b72dca6
	I0321 22:05:13.997385  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.997420  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.997438  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.997452  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.997462  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.997601  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-860915","namespace":"kube-system","uid":"1a170ba9-55b2-4275-be35-718bde52ddc2","resourceVersion":"272","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.mirror":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.seen":"2023-03-21T22:04:28.326472146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0321 22:05:14.187283  153142 request.go:622] Waited for 189.274605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:14.187359  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:14.187374  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:14.187386  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:14.187401  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:14.189633  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:14.189658  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:14.189674  153142 round_trippers.go:580]     Audit-Id: a2e5bdc6-0a45-49a4-99fb-0faeae0a2b55
	I0321 22:05:14.189683  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:14.189698  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:14.189708  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:14.189717  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:14.189750  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:14 GMT
	I0321 22:05:14.194173  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:14.195107  153142 pod_ready.go:92] pod "kube-scheduler-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:14.195129  153142 pod_ready.go:81] duration metric: took 199.645863ms waiting for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:14.195145  153142 pod_ready.go:38] duration metric: took 2.005987166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:05:14.195176  153142 system_svc.go:44] waiting for kubelet service to be running ....
	I0321 22:05:14.195234  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:05:14.205396  153142 system_svc.go:56] duration metric: took 10.214135ms WaitForService to wait for kubelet.
	I0321 22:05:14.205445  153142 kubeadm.go:578] duration metric: took 2.032624025s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0321 22:05:14.205472  153142 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:05:14.387879  153142 request.go:622] Waited for 182.316479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0321 22:05:14.387949  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0321 22:05:14.387959  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:14.387972  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:14.387986  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:14.390525  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:14.390553  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:14.390564  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:14.390573  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:14.390582  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:14.390591  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:14.390606  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:14 GMT
	I0321 22:05:14.390615  153142 round_trippers.go:580]     Audit-Id: 848a0bc4-9246-4f49-babc-f2e3f61ade69
	I0321 22:05:14.390812  153142 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"484"},"items":[{"metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0321 22:05:14.391439  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:05:14.391460  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:05:14.391473  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:05:14.391479  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:05:14.391493  153142 node_conditions.go:105] duration metric: took 186.014843ms to run NodePressure ...
	I0321 22:05:14.391506  153142 start.go:228] waiting for startup goroutines ...
	I0321 22:05:14.391537  153142 start.go:242] writing updated cluster config ...
	I0321 22:05:14.391868  153142 ssh_runner.go:195] Run: rm -f paused
	I0321 22:05:14.451412  153142 start.go:554] kubectl: 1.26.3, cluster: 1.26.2 (minor skew: 0)
	I0321 22:05:14.454591  153142 out.go:177] * Done! kubectl is now configured to use "multinode-860915" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-03-21 22:04:10 UTC, end at Tue 2023-03-21 22:05:19 UTC. --
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206009200Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206059904Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206073175Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206111636Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206147676Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206184263Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206218518Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206263353Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206274299Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206479984Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206497570Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206898010Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.218797406Z" level=info msg="Loading containers: start."
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.295182813Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.329329652Z" level=info msg="Loading containers: done."
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.338331883Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.338393506Z" level=info msg="Daemon has completed initialization"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.351621777Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 21 22:04:14 multinode-860915 systemd[1]: Started Docker Application Container Engine.
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.358309509Z" level=info msg="API listen on [::]:2376"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.362501525Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.796361892Z" level=info msg="ignoring event" container=7c6e7c8c7ec621057c81df63b5d132049342a86d796301973478bac4d02e921e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.801023400Z" level=info msg="ignoring event" container=deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.882785681Z" level=info msg="ignoring event" container=0423e44126334b2958e61f5a0eb34ce609aa11ffbe722b96e164a3d05c2e7916 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.883249266Z" level=info msg="ignoring event" container=e50f99e6f9cfd51059dcc8542745b29f02e9d721eff6cd9e23db2f1d61b33cc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	4491dcfe0bce9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 seconds ago       Running             busybox                   0                   fe30fe956e470
	e04dd78d38779       5185b96f0becf                                                                                         23 seconds ago      Running             coredns                   1                   82abc3f5c9e00
	f0c8fd3eab736       kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07              36 seconds ago      Running             kindnet-cni               0                   54b60a934e259
	835951ebc3451       6e38f40d628db                                                                                         37 seconds ago      Running             storage-provisioner       0                   184457719bde9
	7c6e7c8c7ec62       5185b96f0becf                                                                                         37 seconds ago      Exited              coredns                   0                   0423e44126334
	a42f910ebb092       6f64e7135a6ec                                                                                         38 seconds ago      Running             kube-proxy                0                   4c2c84241a96a
	c175274409c12       db8f409d9a5d7                                                                                         57 seconds ago      Running             kube-scheduler            0                   1f08e1f037a2b
	ab8122344f03b       240e201d5b0d8                                                                                         57 seconds ago      Running             kube-controller-manager   0                   3ac6bbe183a33
	7ed7891684791       fce326961ae2d                                                                                         57 seconds ago      Running             etcd                      0                   adcbef8542b28
	ee6e07b4a24f5       63d3239c3c159                                                                                         57 seconds ago      Running             kube-apiserver            0                   419ff89329152
	
	* 
	* ==> coredns [7c6e7c8c7ec6] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:39101 - 6366 "HINFO IN 7462099254160572882.2590707446242915087. udp 57 false 512" - - 0 5.000109855s
	[ERROR] plugin/errors: 2 7462099254160572882.2590707446242915087. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:53000 - 29472 "HINFO IN 7462099254160572882.2590707446242915087. udp 57 false 512" - - 0 5.000371401s
	[ERROR] plugin/errors: 2 7462099254160572882.2590707446242915087. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [e04dd78d3877] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:55966 - 28057 "HINFO IN 5623652034396813179.2270563698707063122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010892857s
	[INFO] 10.244.0.3:49401 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231647s
	[INFO] 10.244.0.3:38352 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.012664673s
	[INFO] 10.244.0.3:43959 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.03237357s
	[INFO] 10.244.0.3:34396 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009886521s
	[INFO] 10.244.0.3:56332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168549s
	[INFO] 10.244.0.3:43007 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008105178s
	[INFO] 10.244.0.3:34301 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169373s
	[INFO] 10.244.0.3:55270 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115364s
	[INFO] 10.244.0.3:34913 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007978397s
	[INFO] 10.244.0.3:33479 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131614s
	[INFO] 10.244.0.3:60727 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139266s
	[INFO] 10.244.0.3:53647 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010849s
	[INFO] 10.244.0.3:35451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161481s
	[INFO] 10.244.0.3:54753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128824s
	[INFO] 10.244.0.3:41932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090076s
	[INFO] 10.244.0.3:59019 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075913s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-860915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4
	                    minikube.k8s.io/name=multinode-860915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_21T22_04_29_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 21 Mar 2023 22:04:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860915
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 21 Mar 2023 22:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-860915
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                9af59bff-4966-4419-8b87-bcc5c593d400
	  Boot ID:                    527d7f15-1c0f-42e6-b299-1ad744c7814d
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-62ggt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-787d4945fb-wx8p9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     39s
	  kube-system                 etcd-multinode-860915                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         52s
	  kube-system                 kindnet-wnjrv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-multinode-860915             250m (3%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-multinode-860915    200m (2%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-97hnd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-multinode-860915             100m (1%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 38s   kube-proxy       
	  Normal  Starting                 52s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s   kubelet          Node multinode-860915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s   kubelet          Node multinode-860915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s   kubelet          Node multinode-860915 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                52s   kubelet          Node multinode-860915 status is now: NodeReady
	  Normal  RegisteredNode           40s   node-controller  Node multinode-860915 event: Registered Node multinode-860915 in Controller
	
	
	Name:               multinode-860915-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860915-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 21 Mar 2023 22:05:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-860915-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-860915-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                e9904412-06ae-49de-b45e-7c9d93a2667a
	  Boot ID:                    527d7f15-1c0f-42e6-b299-1ad744c7814d
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-kpfz8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-mhzgv               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-slz5b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x2 over 10s)  kubelet          Node multinode-860915-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x2 over 10s)  kubelet          Node multinode-860915-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x2 over 10s)  kubelet          Node multinode-860915-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9s                 kubelet          Node multinode-860915-m02 status is now: NodeReady
	  Normal  RegisteredNode           5s                 node-controller  Node multinode-860915-m02 event: Registered Node multinode-860915-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.008751] FS-Cache: O-key=[8] '8aa00f0200000000'
	[  +0.006306] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007948] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=00000000dcc40d61
	[  +0.008741] FS-Cache: N-key=[8] '8aa00f0200000000'
	[  +3.771261] FS-Cache: Duplicate cookie detected
	[  +0.004700] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006768] FS-Cache: O-cookie d=00000000a1a4eac5{9p.inode} n=000000000e884808
	[  +0.007355] FS-Cache: O-key=[8] '89a00f0200000000'
	[  +0.004937] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006574] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=0000000022acdc3c
	[  +0.008733] FS-Cache: N-key=[8] '89a00f0200000000'
	[  +0.556387] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006759] FS-Cache: O-cookie d=00000000a1a4eac5{9p.inode} n=0000000085e909c2
	[  +0.007366] FS-Cache: O-key=[8] '93a00f0200000000'
	[  +0.004963] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006614] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=0000000099ed53d5
	[  +0.007364] FS-Cache: N-key=[8] '93a00f0200000000'
	[Mar21 21:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Mar21 21:59] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 e9 15 e2 76 16 08 06
	[  +0.002605] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 7d 1c 33 67 c6 08 06
	[Mar21 22:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de be 01 a6 f6 f0 08 06
	
	* 
	* ==> etcd [7ed789168479] <==
	* {"level":"info","ts":"2023-03-21T22:04:22.772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-03-21T22:04:22.773Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-860915 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-03-21T22:04:23.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:05:20 up 47 min,  0 users,  load average: 2.16, 2.15, 1.46
	Linux multinode-860915 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [f0c8fd3eab73] <==
	* I0321 22:04:43.870241       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0321 22:04:43.870296       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0321 22:04:43.870447       1 main.go:116] setting mtu 1500 for CNI 
	I0321 22:04:43.870470       1 main.go:146] kindnetd IP family: "ipv4"
	I0321 22:04:43.870489       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0321 22:04:44.168162       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:04:44.168196       1 main.go:227] handling current node
	I0321 22:04:54.181052       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:04:54.181085       1 main.go:227] handling current node
	I0321 22:05:04.192655       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:05:04.192678       1 main.go:227] handling current node
	I0321 22:05:14.197016       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:05:14.197046       1 main.go:227] handling current node
	I0321 22:05:14.197063       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0321 22:05:14.197069       1 main.go:250] Node multinode-860915-m02 has CIDR [10.244.1.0/24] 
	I0321 22:05:14.197263       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [ee6e07b4a24f] <==
	* I0321 22:04:25.299627       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0321 22:04:25.299637       1 cache.go:39] Caches are synced for autoregister controller
	I0321 22:04:25.299735       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0321 22:04:25.299866       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0321 22:04:25.299874       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0321 22:04:25.299963       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0321 22:04:25.300837       1 shared_informer.go:280] Caches are synced for configmaps
	I0321 22:04:25.302522       1 controller.go:615] quota admission added evaluator for: namespaces
	I0321 22:04:25.313078       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0321 22:04:25.994263       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0321 22:04:26.204898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0321 22:04:26.208406       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0321 22:04:26.208419       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0321 22:04:26.629392       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0321 22:04:26.662131       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0321 22:04:26.787922       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0321 22:04:26.793124       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0321 22:04:26.793939       1 controller.go:615] quota admission added evaluator for: endpoints
	I0321 22:04:26.797341       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0321 22:04:27.281198       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0321 22:04:28.256855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0321 22:04:28.265783       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0321 22:04:28.273834       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0321 22:04:40.973793       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0321 22:04:41.075289       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [ab8122344f03] <==
	* I0321 22:04:40.286809       1 shared_informer.go:280] Caches are synced for job
	I0321 22:04:40.324329       1 shared_informer.go:280] Caches are synced for resource quota
	I0321 22:04:40.330466       1 shared_informer.go:280] Caches are synced for cronjob
	I0321 22:04:40.336630       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0321 22:04:40.340833       1 shared_informer.go:280] Caches are synced for resource quota
	I0321 22:04:40.649331       1 shared_informer.go:280] Caches are synced for garbage collector
	I0321 22:04:40.649353       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0321 22:04:40.670096       1 shared_informer.go:280] Caches are synced for garbage collector
	I0321 22:04:40.983575       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wnjrv"
	I0321 22:04:40.985047       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-97hnd"
	I0321 22:04:41.078943       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0321 22:04:41.095163       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0321 22:04:41.180642       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-69rb6"
	I0321 22:04:41.188616       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-wx8p9"
	I0321 22:04:41.280282       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-69rb6"
	W0321 22:05:11.114752       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-860915-m02" does not exist
	I0321 22:05:11.120679       1 range_allocator.go:372] Set node multinode-860915-m02 PodCIDR to [10.244.1.0/24]
	I0321 22:05:11.123318       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mhzgv"
	I0321 22:05:11.123351       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slz5b"
	W0321 22:05:11.826761       1 topologycache.go:232] Can't get CPU or zone information for multinode-860915-m02 node
	W0321 22:05:15.102291       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-860915-m02. Assuming now as a timestamp.
	I0321 22:05:15.102331       1 event.go:294] "Event occurred" object="multinode-860915-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-860915-m02 event: Registered Node multinode-860915-m02 in Controller"
	I0321 22:05:15.477035       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0321 22:05:15.484324       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-kpfz8"
	I0321 22:05:15.489077       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-62ggt"
	
	* 
	* ==> kube-proxy [a42f910ebb09] <==
	* I0321 22:04:41.748662       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0321 22:04:41.748721       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0321 22:04:41.748742       1 server_others.go:535] "Using iptables proxy"
	I0321 22:04:41.766404       1 server_others.go:176] "Using iptables Proxier"
	I0321 22:04:41.766436       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0321 22:04:41.766444       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0321 22:04:41.766461       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0321 22:04:41.766487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0321 22:04:41.766789       1 server.go:655] "Version info" version="v1.26.2"
	I0321 22:04:41.766805       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0321 22:04:41.767258       1 config.go:317] "Starting service config controller"
	I0321 22:04:41.767295       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0321 22:04:41.767688       1 config.go:444] "Starting node config controller"
	I0321 22:04:41.767713       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0321 22:04:41.767257       1 config.go:226] "Starting endpoint slice config controller"
	I0321 22:04:41.768601       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0321 22:04:41.868114       1 shared_informer.go:280] Caches are synced for node config
	I0321 22:04:41.868839       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0321 22:04:41.868848       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [c175274409c1] <==
	* W0321 22:04:25.285904       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0321 22:04:25.285921       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0321 22:04:25.286147       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0321 22:04:25.286169       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0321 22:04:26.096488       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0321 22:04:26.096520       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0321 22:04:26.243103       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0321 22:04:26.243127       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0321 22:04:26.279154       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0321 22:04:26.279183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0321 22:04:26.291550       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0321 22:04:26.291590       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0321 22:04:26.321706       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0321 22:04:26.321783       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0321 22:04:26.350813       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0321 22:04:26.350852       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0321 22:04:26.363774       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.363810       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.430510       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.430539       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.444359       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.444378       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.473821       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.473851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0321 22:04:26.782302       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-03-21 22:04:10 UTC, end at Tue 2023-03-21 22:05:20 UTC. --
	Mar 21 22:04:48 multinode-860915 kubelet[2309]: I0321 22:04:48.833795    2309 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 21 22:04:48 multinode-860915 kubelet[2309]: I0321 22:04:48.835036    2309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096380    2309 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5g5m\" (UniqueName: \"kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m\") pod \"a381b86f-bcde-484c-878c-056280374301\" (UID: \"a381b86f-bcde-484c-878c-056280374301\") "
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096450    2309 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume\") pod \"a381b86f-bcde-484c-878c-056280374301\" (UID: \"a381b86f-bcde-484c-878c-056280374301\") "
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: W0321 22:04:56.096630    2309 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a381b86f-bcde-484c-878c-056280374301/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096790    2309 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume" (OuterVolumeSpecName: "config-volume") pod "a381b86f-bcde-484c-878c-056280374301" (UID: "a381b86f-bcde-484c-878c-056280374301"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.098439    2309 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m" (OuterVolumeSpecName: "kube-api-access-j5g5m") pod "a381b86f-bcde-484c-878c-056280374301" (UID: "a381b86f-bcde-484c-878c-056280374301"). InnerVolumeSpecName "kube-api-access-j5g5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.197307    2309 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume\") on node \"multinode-860915\" DevicePath \"\""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.197340    2309 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-j5g5m\" (UniqueName: \"kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m\") on node \"multinode-860915\" DevicePath \"\""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.224811    2309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0423e44126334b2958e61f5a0eb34ce609aa11ffbe722b96e164a3d05c2e7916"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.229421    2309 scope.go:115] "RemoveContainer" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.243073    2309 scope.go:115] "RemoveContainer" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.243799    2309 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.243852    2309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3} err="failed to get container status \"deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3\": rpc error: code = Unknown desc = Error: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401038    2309 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401107    2309 kuberuntime_container.go:714] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301 containerName="coredns" containerID="docker://deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" gracePeriod=1
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401134    2309 kuberuntime_container.go:739] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301 containerName="coredns" containerID={Type:docker ID:deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3}
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.405508    2309 kubelet.go:1874] failed to "KillContainer" for "coredns" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.405554    2309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3\"" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.407115    2309 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a381b86f-bcde-484c-878c-056280374301 path="/var/lib/kubelet/pods/a381b86f-bcde-484c-878c-056280374301/volumes"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.493289    2309 topology_manager.go:210] "Topology Admit Handler"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: E0321 22:05:15.493380    2309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a381b86f-bcde-484c-878c-056280374301" containerName="coredns"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.493420    2309 memory_manager.go:346] "RemoveStaleState removing state" podUID="a381b86f-bcde-484c-878c-056280374301" containerName="coredns"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.606453    2309 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44dp\" (UniqueName: \"kubernetes.io/projected/ebd8bedf-1c50-4a50-bb45-ad2ffcf8e054-kube-api-access-b44dp\") pod \"busybox-6b86dd6d48-62ggt\" (UID: \"ebd8bedf-1c50-4a50-bb45-ad2ffcf8e054\") " pod="default/busybox-6b86dd6d48-62ggt"
	Mar 21 22:05:17 multinode-860915 kubelet[2309]: I0321 22:05:17.366362    2309 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-62ggt" podStartSLOduration=-9.223372034488459e+09 pod.CreationTimestamp="2023-03-21 22:05:15 +0000 UTC" firstStartedPulling="2023-03-21 22:05:16.043973735 +0000 UTC m=+47.807395569" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-21 22:05:17.365874778 +0000 UTC m=+49.129296630" watchObservedRunningTime="2023-03-21 22:05:17.366316358 +0000 UTC m=+49.129738208"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-860915 -n multinode-860915
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-860915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (5.86s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-62ggt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-62ggt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-860915 -- exec busybox-6b86dd6d48-kpfz8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-860915
helpers_test.go:235: (dbg) docker inspect multinode-860915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc",
	        "Created": "2023-03-21T22:04:10.120040349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 154141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-21T22:04:10.448384332Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/hostname",
	        "HostsPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/hosts",
	        "LogPath": "/var/lib/docker/containers/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc-json.log",
	        "Name": "/multinode-860915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-860915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-860915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83-init/diff:/var/lib/docker/overlay2/d640a49204b62cbdd456945d4d005345a58620b62ae9b33d65049d1c993396b8/diff:/var/lib/docker/overlay2/2f69ca1a3446908a3a75abc91f938fabe5666af6aeb8015b4624852cff4cddf4/diff:/var/lib/docker/overlay2/77826550c3b08610fd851464ed2b7833a274ce77dd51835381cdb9c21b556c7d/diff:/var/lib/docker/overlay2/e15956ab42b8efa1672992b84fed94e79fbbeae307eec145f36b8093817fbc9d/diff:/var/lib/docker/overlay2/f89b982ab58387313cef069aedcdc102b85e2564f1414edf0b099b6d06e8d760/diff:/var/lib/docker/overlay2/7327a750743ed9373f2f5681004c7795a4b64f5704efcb57ee5e29ab3757844d/diff:/var/lib/docker/overlay2/01a1b9b43163306f4f6240c5ba892c673598ce3b971a08a0a97fe0f5239214db/diff:/var/lib/docker/overlay2/bf4d055c40227bfa14f549a18ded4142ad9306fd9458230679aaa4900118281b/diff:/var/lib/docker/overlay2/74353c9e2d5dc25abe40c7777e0c09395af27bb7a8a3e18498bf7904846b7f11/diff:/var/lib/docker/overlay2/fe3c07
69566c45228c4f0c59f9f20d0a974e3493d4468ee806436c1fbc085a8f/diff:/var/lib/docker/overlay2/05557f82a049377810342eaad5167446fffe852231ffc75334cb98105537c915/diff:/var/lib/docker/overlay2/0d8fe544a42c85fa45a0902d36c933192fa8315b60a92c196f5a416ea55bebc2/diff:/var/lib/docker/overlay2/33a688c843fa0b7360dc919ee277bdcda578b2c83406f9fc5cf8859bb20439fe/diff:/var/lib/docker/overlay2/627d3c89f753c6719656c148f1bb6c9bb4a106753297be3a3ba7efe924e398e3/diff:/var/lib/docker/overlay2/06bb92ecbc5497dbcd6cb4f5e86dbad24fe99e250f8004cec607f90003af9137/diff:/var/lib/docker/overlay2/f42dc72746bc9fc6065a63dee52653a285840ff1dc5ee7aad14b1e7cafb0475a/diff:/var/lib/docker/overlay2/5c4e7423869c8634195ccedb2ad5c6fad22fbda5790e5761deb4f224967328eb/diff:/var/lib/docker/overlay2/8691a3dd8c2958c2e4bace0a06058278ebdb723d8409115a9cdbe0f792fa44c5/diff:/var/lib/docker/overlay2/85ac776fea5b19e198189e15e3f73cc69377ed4c78d93b05d1a9c2a8deaa4747/diff:/var/lib/docker/overlay2/3b812c4414ef925986f30a5a9b882e40f03ee837f5d194ac8aa3057989b7105d/diff:/var/lib/d
ocker/overlay2/5f251bd97e750c54f037dd59f3f4f8d219015fb750d9fdc760c223b79dcafa21/diff:/var/lib/docker/overlay2/5666d5e81373d3ebaec11d38c62fe4981ec0220630d33665e71d5ea1d81809ca/diff:/var/lib/docker/overlay2/5792555264d0709dc72fe4cb189ff0d80530c20670d4d9f056cf6c03792d9b30/diff:/var/lib/docker/overlay2/9c49a7384955edf6efb79c488e86f1d4b40cc81a57cf0d014243e0863d095054/diff:/var/lib/docker/overlay2/6e36cdb4177b2c7d49fe1ee1b4dc25a61c12fee623506ab19552e2ae8742235d/diff:/var/lib/docker/overlay2/9e066b30f26fd8c76ac87635130e55dfcc7c8865a5039b98735ee1c04266b065/diff:/var/lib/docker/overlay2/3612d1a6aa3e0e19293da12b3afe1e28522248ab181db4001942f6bf17eee0af/diff:/var/lib/docker/overlay2/70c7cccc16141dac653158cec14640abd099db393b3c90bb1af94507efed3f2b/diff:/var/lib/docker/overlay2/36f9d3b3eb7d184762018b2ccf3682c78b2997003c52951acdf3d57d5f668513/diff:/var/lib/docker/overlay2/78a72850c17cb0d5bc811600906e759868b4f83bd5e24b09c8f8410c52bc05bc/diff:/var/lib/docker/overlay2/37f6e96371561f325d1feb1c6af3e0f272fddd9f52bf01b650fe3281d6a
900ef/diff:/var/lib/docker/overlay2/c1c9ec5dbda3b0a53eb1c677bb46ec48a6851a8b8e3cfc5dbd46221b31aa3f1e/diff:/var/lib/docker/overlay2/e1648eef5200cf93385333f2ec689b44c2bcda83e09ab0386b268b42beb65592/diff:/var/lib/docker/overlay2/db69e6572eda782bd31d2ce0e4712d18c6aea21e1c07fa3db7316703d4134d66/diff:/var/lib/docker/overlay2/886164b1ee98c6ea98e2b536acfb100958b631d7f0a728ef6d5df5ad3a6200d8/diff:/var/lib/docker/overlay2/1181f6b9ebac02faf5e0f7cc46878bd68f6445538dd92f2d543b5044f6d18086/diff:/var/lib/docker/overlay2/47c234d852a21d6a8d75aa0538ac1629cef07994d3203f204c35c6331320983e/diff:/var/lib/docker/overlay2/ab6bf2566e0df27f85567c3169d4a2484be237abe237f02da1d144eae02eb2a6/diff:/var/lib/docker/overlay2/fffc78b1cef142380fa7093faac387c6bc601298db1fbff1b96978271b9aedb1/diff:/var/lib/docker/overlay2/00e44020633971002e0909c887ed95319d30728c1006a38c1aa9478ca2e20349/diff:/var/lib/docker/overlay2/2428fd5b50e8eec63b30fd533c95c5a69a1f50c93cad081ab7d4919549d7dfca/diff:/var/lib/docker/overlay2/82d0bc3e11a8925c2176e9c8536f8b1a628915
149712d6c580274cce9c037a7a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85c72d93c1028292580b6028ff6ac407946625266bfff9710ac971ef0ed4fc83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-860915",
	                "Source": "/var/lib/docker/volumes/multinode-860915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-860915",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-860915",
	                "name.minikube.sigs.k8s.io": "multinode-860915",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b5b5d79b7895400795904c26cab7b38fbbe49979122c2a962e47f82bb4fb74a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4b5b5d79b789",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-860915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cea2236b9832",
	                        "multinode-860915"
	                    ],
	                    "NetworkID": "4324ac65e556d4a4c34c0ca93ae29a3fc50c655ceca50aaea9b992e52a60d35d",
	                    "EndpointID": "e239b70323757b86270de8c8fad89b6de84738686066dbc427155b96dc44a251",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-860915 -n multinode-860915
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 logs -n 25: (1.050921306s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-1-353660 ssh -- ls                    | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-353660                           | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| start   | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:03 UTC | 21 Mar 23 22:03 UTC |
	| ssh     | mount-start-2-370454 ssh -- ls                    | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-370454                           | mount-start-2-370454 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	| delete  | -p mount-start-1-353660                           | mount-start-1-353660 | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:04 UTC |
	| start   | -p multinode-860915                               | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:04 UTC | 21 Mar 23 22:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- apply -f                   | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- rollout                    | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- get pods -o                | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- get pods -o                | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC |                     |
	|         | busybox-6b86dd6d48-kpfz8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- get pods -o                | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-62ggt -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-860915 -- exec                       | multinode-860915     | jenkins | v1.29.0 | 21 Mar 23 22:05 UTC | 21 Mar 23 22:05 UTC |
	|         | busybox-6b86dd6d48-kpfz8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 22:04:03
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 22:04:03.744505  153142 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:04:03.744728  153142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:04:03.744738  153142 out.go:309] Setting ErrFile to fd 2...
	I0321 22:04:03.744742  153142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:04:03.744841  153142 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 22:04:03.745387  153142 out.go:303] Setting JSON to false
	I0321 22:04:03.746750  153142 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2796,"bootTime":1679433448,"procs":891,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 22:04:03.746811  153142 start.go:135] virtualization: kvm guest
	I0321 22:04:03.749615  153142 out.go:177] * [multinode-860915] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 22:04:03.751151  153142 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 22:04:03.751166  153142 notify.go:220] Checking for updates...
	I0321 22:04:03.752884  153142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 22:04:03.754552  153142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:03.756081  153142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 22:04:03.757403  153142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 22:04:03.758749  153142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 22:04:03.760175  153142 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 22:04:03.827539  153142 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 22:04:03.827662  153142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 22:04:03.945489  153142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-21 22:04:03.936970375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 22:04:03.945585  153142 docker.go:294] overlay module found
	I0321 22:04:03.947515  153142 out.go:177] * Using the docker driver based on user configuration
	I0321 22:04:03.949298  153142 start.go:295] selected driver: docker
	I0321 22:04:03.949309  153142 start.go:856] validating driver "docker" against <nil>
	I0321 22:04:03.949318  153142 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 22:04:03.950009  153142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 22:04:04.065279  153142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-03-21 22:04:04.057187513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 22:04:04.065407  153142 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0321 22:04:04.065683  153142 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0321 22:04:04.067535  153142 out.go:177] * Using Docker driver with root privileges
	I0321 22:04:04.068931  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:04.068947  153142 cni.go:136] 0 nodes found, recommending kindnet
	I0321 22:04:04.068954  153142 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0321 22:04:04.068965  153142 start_flags.go:319] config:
	{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:04:04.070684  153142 out.go:177] * Starting control plane node multinode-860915 in cluster multinode-860915
	I0321 22:04:04.072065  153142 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 22:04:04.073462  153142 out.go:177] * Pulling base image ...
	I0321 22:04:04.074806  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:04.074838  153142 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0321 22:04:04.074847  153142 cache.go:57] Caching tarball of preloaded images
	I0321 22:04:04.074899  153142 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 22:04:04.074915  153142 preload.go:174] Found /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0321 22:04:04.074925  153142 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0321 22:04:04.075216  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:04.075236  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json: {Name:mk88dbb8da7413ed3f2bbb1b1a154d821228fcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:04.138767  153142 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0321 22:04:04.138792  153142 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0321 22:04:04.138810  153142 cache.go:193] Successfully downloaded all kic artifacts
	I0321 22:04:04.138846  153142 start.go:364] acquiring machines lock for multinode-860915: {Name:mk71a5a6463f94b190d019928f9ca0fdae04ca58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:04:04.138941  153142 start.go:368] acquired machines lock for "multinode-860915" in 76.848µs
	I0321 22:04:04.138964  153142 start.go:93] Provisioning new machine with config: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0321 22:04:04.139058  153142 start.go:125] createHost starting for "" (driver="docker")
	I0321 22:04:04.141298  153142 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0321 22:04:04.141509  153142 start.go:159] libmachine.API.Create for "multinode-860915" (driver="docker")
	I0321 22:04:04.141536  153142 client.go:168] LocalClient.Create starting
	I0321 22:04:04.141630  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem
	I0321 22:04:04.141660  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:04.141676  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:04.141727  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem
	I0321 22:04:04.141746  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:04.141754  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:04.142063  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0321 22:04:04.204809  153142 cli_runner.go:211] docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0321 22:04:04.204884  153142 network_create.go:281] running [docker network inspect multinode-860915] to gather additional debugging logs...
	I0321 22:04:04.204905  153142 cli_runner.go:164] Run: docker network inspect multinode-860915
	W0321 22:04:04.265273  153142 cli_runner.go:211] docker network inspect multinode-860915 returned with exit code 1
	I0321 22:04:04.265308  153142 network_create.go:284] error running [docker network inspect multinode-860915]: docker network inspect multinode-860915: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-860915 not found
	I0321 22:04:04.265322  153142 network_create.go:286] output of [docker network inspect multinode-860915]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-860915 not found
	
	** /stderr **
	I0321 22:04:04.265370  153142 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:04.331927  153142 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-068451d2c467 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:66:9e:04:2f} reservation:<nil>}
	I0321 22:04:04.332388  153142 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001614640}
	I0321 22:04:04.332419  153142 network_create.go:123] attempt to create docker network multinode-860915 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0321 22:04:04.332458  153142 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-860915 multinode-860915
	I0321 22:04:04.431448  153142 network_create.go:107] docker network multinode-860915 192.168.58.0/24 created
	I0321 22:04:04.431475  153142 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-860915" container
	I0321 22:04:04.431523  153142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0321 22:04:04.494901  153142 cli_runner.go:164] Run: docker volume create multinode-860915 --label name.minikube.sigs.k8s.io=multinode-860915 --label created_by.minikube.sigs.k8s.io=true
	I0321 22:04:04.559292  153142 oci.go:103] Successfully created a docker volume multinode-860915
	I0321 22:04:04.559366  153142 cli_runner.go:164] Run: docker run --rm --name multinode-860915-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915 --entrypoint /usr/bin/test -v multinode-860915:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0321 22:04:05.132509  153142 oci.go:107] Successfully prepared a docker volume multinode-860915
	I0321 22:04:05.132565  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:05.132591  153142 kic.go:190] Starting extracting preloaded images to volume ...
	I0321 22:04:05.132656  153142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0321 22:04:09.938515  153142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (4.805787078s)
	I0321 22:04:09.938551  153142 kic.go:199] duration metric: took 4.805957 seconds to extract preloaded images to volume
	W0321 22:04:09.938706  153142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0321 22:04:09.938811  153142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0321 22:04:10.056157  153142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-860915 --name multinode-860915 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-860915 --network multinode-860915 --ip 192.168.58.2 --volume multinode-860915:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0321 22:04:10.456490  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Running}}
	I0321 22:04:10.525789  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:10.592592  153142 cli_runner.go:164] Run: docker exec multinode-860915 stat /var/lib/dpkg/alternatives/iptables
	I0321 22:04:10.709843  153142 oci.go:144] the created container "multinode-860915" has a running status.
	I0321 22:04:10.709884  153142 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa...
	I0321 22:04:10.753358  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0321 22:04:10.753416  153142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0321 22:04:10.876805  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:10.947350  153142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0321 22:04:10.947375  153142 kic_runner.go:114] Args: [docker exec --privileged multinode-860915 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0321 22:04:11.066972  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:11.136867  153142 machine.go:88] provisioning docker machine ...
	I0321 22:04:11.136929  153142 ubuntu.go:169] provisioning hostname "multinode-860915"
	I0321 22:04:11.136983  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.199873  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.200308  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.200325  153142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860915 && echo "multinode-860915" | sudo tee /etc/hostname
	I0321 22:04:11.322112  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860915
	
	I0321 22:04:11.322184  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.387413  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.387822  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.387841  153142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860915/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0321 22:04:11.501517  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0321 22:04:11.501546  153142 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16124-3841/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-3841/.minikube}
	I0321 22:04:11.501578  153142 ubuntu.go:177] setting up certificates
	I0321 22:04:11.501587  153142 provision.go:83] configureAuth start
	I0321 22:04:11.501636  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:11.565024  153142 provision.go:138] copyHostCerts
	I0321 22:04:11.565060  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:04:11.565090  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem, removing ...
	I0321 22:04:11.565098  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:04:11.565162  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem (1082 bytes)
	I0321 22:04:11.565234  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:04:11.565252  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem, removing ...
	I0321 22:04:11.565256  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:04:11.565281  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem (1123 bytes)
	I0321 22:04:11.565321  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:04:11.565336  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem, removing ...
	I0321 22:04:11.565342  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:04:11.565364  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem (1675 bytes)
	I0321 22:04:11.565408  153142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem org=jenkins.multinode-860915 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-860915]
	I0321 22:04:11.666578  153142 provision.go:172] copyRemoteCerts
	I0321 22:04:11.666641  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0321 22:04:11.666673  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.733368  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:11.816832  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0321 22:04:11.816886  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0321 22:04:11.833484  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0321 22:04:11.833540  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0321 22:04:11.849328  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0321 22:04:11.849387  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0321 22:04:11.865155  153142 provision.go:86] duration metric: configureAuth took 363.554942ms
	I0321 22:04:11.865181  153142 ubuntu.go:193] setting minikube options for container-runtime
	I0321 22:04:11.865342  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:11.865387  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:11.927979  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:11.928380  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:11.928394  153142 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0321 22:04:12.041503  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0321 22:04:12.041535  153142 ubuntu.go:71] root file system type: overlay
	I0321 22:04:12.041679  153142 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0321 22:04:12.041751  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.106236  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:12.106635  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:12.106695  153142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0321 22:04:12.225943  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0321 22:04:12.226039  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.288216  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:04:12.288677  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0321 22:04:12.288698  153142 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0321 22:04:12.901398  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-21 22:04:12.219688201 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0321 22:04:12.901432  153142 machine.go:91] provisioned docker machine in 1.764541806s
	I0321 22:04:12.901442  153142 client.go:171] LocalClient.Create took 8.759901007s
	I0321 22:04:12.901465  153142 start.go:167] duration metric: libmachine.API.Create for "multinode-860915" took 8.75995422s
	I0321 22:04:12.901477  153142 start.go:300] post-start starting for "multinode-860915" (driver="docker")
	I0321 22:04:12.901484  153142 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0321 22:04:12.901551  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0321 22:04:12.901599  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:12.964498  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.053201  153142 ssh_runner.go:195] Run: cat /etc/os-release
	I0321 22:04:13.055725  153142 command_runner.go:130] > NAME="Ubuntu"
	I0321 22:04:13.055743  153142 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0321 22:04:13.055747  153142 command_runner.go:130] > ID=ubuntu
	I0321 22:04:13.055752  153142 command_runner.go:130] > ID_LIKE=debian
	I0321 22:04:13.055756  153142 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0321 22:04:13.055760  153142 command_runner.go:130] > VERSION_ID="20.04"
	I0321 22:04:13.055767  153142 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0321 22:04:13.055774  153142 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0321 22:04:13.055782  153142 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0321 22:04:13.055798  153142 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0321 22:04:13.055811  153142 command_runner.go:130] > VERSION_CODENAME=focal
	I0321 22:04:13.055817  153142 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0321 22:04:13.055885  153142 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0321 22:04:13.055898  153142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0321 22:04:13.055906  153142 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0321 22:04:13.055912  153142 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0321 22:04:13.055920  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/addons for local assets ...
	I0321 22:04:13.055960  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/files for local assets ...
	I0321 22:04:13.056024  153142 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> 105322.pem in /etc/ssl/certs
	I0321 22:04:13.056034  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /etc/ssl/certs/105322.pem
	I0321 22:04:13.056109  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0321 22:04:13.062191  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:04:13.078761  153142 start.go:303] post-start completed in 177.272012ms
	I0321 22:04:13.079110  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:13.142344  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:13.142577  153142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:04:13.142614  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.206109  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.285871  153142 command_runner.go:130] > 17%!
	(MISSING)I0321 22:04:13.286068  153142 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0321 22:04:13.289661  153142 command_runner.go:130] > 244G
	I0321 22:04:13.289689  153142 start.go:128] duration metric: createHost completed in 9.150623099s
	I0321 22:04:13.289700  153142 start.go:83] releasing machines lock for "multinode-860915", held for 9.150747768s
	I0321 22:04:13.289768  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:04:13.350474  153142 ssh_runner.go:195] Run: cat /version.json
	I0321 22:04:13.350529  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.350487  153142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0321 22:04:13.350633  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:13.420136  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.421481  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:13.535295  153142 command_runner.go:130] > {"iso_version": "v1.29.0-1678210391-15973", "kicbase_version": "v0.0.37-1679075007-16079", "minikube_version": "v1.29.0", "commit": "e88c2b31272b40b6ab7f12032e3d1be586055049"}
	I0321 22:04:13.535383  153142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0321 22:04:13.535456  153142 ssh_runner.go:195] Run: systemctl --version
	I0321 22:04:13.538764  153142 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.20)
	I0321 22:04:13.538796  153142 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0321 22:04:13.538929  153142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0321 22:04:13.542668  153142 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0321 22:04:13.542694  153142 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0321 22:04:13.542705  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1322525     Links: 1
	I0321 22:04:13.542716  153142 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:04:13.542732  153142 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:04:13.542745  153142 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:04:13.542754  153142 command_runner.go:130] > Change: 2023-03-21 21:49:53.137271995 +0000
	I0321 22:04:13.542762  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:13.542818  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0321 22:04:13.561901  153142 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0321 22:04:13.561956  153142 ssh_runner.go:195] Run: which cri-dockerd
	I0321 22:04:13.564471  153142 command_runner.go:130] > /usr/bin/cri-dockerd
	I0321 22:04:13.564630  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0321 22:04:13.571035  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0321 22:04:13.582842  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0321 22:04:13.597154  153142 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0321 22:04:13.597181  153142 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0321 22:04:13.597198  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:04:13.597231  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:04:13.597338  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:04:13.608700  153142 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0321 22:04:13.608778  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0321 22:04:13.616003  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0321 22:04:13.623081  153142 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0321 22:04:13.623121  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0321 22:04:13.630219  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:04:13.637084  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0321 22:04:13.643854  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:04:13.650879  153142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0321 22:04:13.657544  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0321 22:04:13.666967  153142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0321 22:04:13.672922  153142 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0321 22:04:13.672976  153142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0321 22:04:13.678867  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:13.752695  153142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:04:13.837727  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:04:13.837776  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:04:13.837828  153142 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0321 22:04:13.846591  153142 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0321 22:04:13.846616  153142 command_runner.go:130] > [Unit]
	I0321 22:04:13.846626  153142 command_runner.go:130] > Description=Docker Application Container Engine
	I0321 22:04:13.846635  153142 command_runner.go:130] > Documentation=https://docs.docker.com
	I0321 22:04:13.846640  153142 command_runner.go:130] > BindsTo=containerd.service
	I0321 22:04:13.846646  153142 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0321 22:04:13.846650  153142 command_runner.go:130] > Wants=network-online.target
	I0321 22:04:13.846655  153142 command_runner.go:130] > Requires=docker.socket
	I0321 22:04:13.846659  153142 command_runner.go:130] > StartLimitBurst=3
	I0321 22:04:13.846663  153142 command_runner.go:130] > StartLimitIntervalSec=60
	I0321 22:04:13.846667  153142 command_runner.go:130] > [Service]
	I0321 22:04:13.846672  153142 command_runner.go:130] > Type=notify
	I0321 22:04:13.846680  153142 command_runner.go:130] > Restart=on-failure
	I0321 22:04:13.846697  153142 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0321 22:04:13.846712  153142 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0321 22:04:13.846726  153142 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0321 22:04:13.846740  153142 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0321 22:04:13.846755  153142 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0321 22:04:13.846765  153142 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0321 22:04:13.846775  153142 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0321 22:04:13.846789  153142 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0321 22:04:13.846800  153142 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0321 22:04:13.846809  153142 command_runner.go:130] > ExecStart=
	I0321 22:04:13.846832  153142 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0321 22:04:13.846846  153142 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0321 22:04:13.846859  153142 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0321 22:04:13.846886  153142 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0321 22:04:13.846893  153142 command_runner.go:130] > LimitNOFILE=infinity
	I0321 22:04:13.846903  153142 command_runner.go:130] > LimitNPROC=infinity
	I0321 22:04:13.846910  153142 command_runner.go:130] > LimitCORE=infinity
	I0321 22:04:13.846919  153142 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0321 22:04:13.846932  153142 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0321 22:04:13.846942  153142 command_runner.go:130] > TasksMax=infinity
	I0321 22:04:13.846952  153142 command_runner.go:130] > TimeoutStartSec=0
	I0321 22:04:13.846963  153142 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0321 22:04:13.846973  153142 command_runner.go:130] > Delegate=yes
	I0321 22:04:13.846985  153142 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0321 22:04:13.846995  153142 command_runner.go:130] > KillMode=process
	I0321 22:04:13.847012  153142 command_runner.go:130] > [Install]
	I0321 22:04:13.847023  153142 command_runner.go:130] > WantedBy=multi-user.target
	I0321 22:04:13.847560  153142 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0321 22:04:13.847620  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0321 22:04:13.858227  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:04:13.869990  153142 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0321 22:04:13.870956  153142 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0321 22:04:13.976269  153142 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0321 22:04:14.053856  153142 docker.go:531] configuring docker to use "cgroupfs" as cgroup driver...
	I0321 22:04:14.053889  153142 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0321 22:04:14.078322  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:14.149044  153142 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0321 22:04:14.353281  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:04:14.433535  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0321 22:04:14.433604  153142 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0321 22:04:14.512865  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:04:14.593622  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:04:14.669419  153142 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0321 22:04:14.680008  153142 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0321 22:04:14.680079  153142 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0321 22:04:14.682928  153142 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0321 22:04:14.682947  153142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0321 22:04:14.682954  153142 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0321 22:04:14.682964  153142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0321 22:04:14.682975  153142 command_runner.go:130] > Access: 2023-03-21 22:04:14.671934836 +0000
	I0321 22:04:14.682983  153142 command_runner.go:130] > Modify: 2023-03-21 22:04:14.671934836 +0000
	I0321 22:04:14.682990  153142 command_runner.go:130] > Change: 2023-03-21 22:04:14.675935238 +0000
	I0321 22:04:14.683001  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:14.683023  153142 start.go:553] Will wait 60s for crictl version
	I0321 22:04:14.683056  153142 ssh_runner.go:195] Run: which crictl
	I0321 22:04:14.685524  153142 command_runner.go:130] > /usr/bin/crictl
	I0321 22:04:14.685575  153142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0321 22:04:14.762372  153142 command_runner.go:130] > Version:  0.1.0
	I0321 22:04:14.762392  153142 command_runner.go:130] > RuntimeName:  docker
	I0321 22:04:14.762398  153142 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0321 22:04:14.762406  153142 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0321 22:04:14.762423  153142 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0321 22:04:14.762467  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:04:14.784502  153142 command_runner.go:130] > 23.0.1
	I0321 22:04:14.784590  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:04:14.805223  153142 command_runner.go:130] > 23.0.1
	I0321 22:04:14.808863  153142 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0321 22:04:14.808942  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:14.871231  153142 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0321 22:04:14.874308  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:04:14.883138  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:14.883193  153142 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0321 22:04:14.899734  153142 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0321 22:04:14.899760  153142 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0321 22:04:14.899767  153142 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0321 22:04:14.899775  153142 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0321 22:04:14.899783  153142 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0321 22:04:14.899790  153142 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0321 22:04:14.899798  153142 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0321 22:04:14.899807  153142 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:14.900804  153142 docker.go:632] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0321 22:04:14.900830  153142 docker.go:562] Images already preloaded, skipping extraction
	I0321 22:04:14.900883  153142 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0321 22:04:14.919573  153142 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.2
	I0321 22:04:14.919596  153142 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.2
	I0321 22:04:14.919601  153142 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.2
	I0321 22:04:14.919606  153142 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.2
	I0321 22:04:14.919611  153142 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0321 22:04:14.919615  153142 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0321 22:04:14.919619  153142 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0321 22:04:14.919625  153142 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:14.920842  153142 docker.go:632] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.2
	registry.k8s.io/kube-scheduler:v1.26.2
	registry.k8s.io/kube-controller-manager:v1.26.2
	registry.k8s.io/kube-proxy:v1.26.2
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0321 22:04:14.920865  153142 cache_images.go:84] Images are preloaded, skipping loading
	I0321 22:04:14.920911  153142 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0321 22:04:14.942590  153142 command_runner.go:130] > cgroupfs
	I0321 22:04:14.942631  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:14.942640  153142 cni.go:136] 1 nodes found, recommending kindnet
	I0321 22:04:14.942656  153142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0321 22:04:14.942672  153142 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860915 NodeName:multinode-860915 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0321 22:04:14.942784  153142 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-860915"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0321 22:04:14.942881  153142 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-860915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0321 22:04:14.942925  153142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0321 22:04:14.948975  153142 command_runner.go:130] > kubeadm
	I0321 22:04:14.948988  153142 command_runner.go:130] > kubectl
	I0321 22:04:14.948993  153142 command_runner.go:130] > kubelet
	I0321 22:04:14.949516  153142 binaries.go:44] Found k8s binaries, skipping transfer
	I0321 22:04:14.949580  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0321 22:04:14.955892  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0321 22:04:14.967690  153142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0321 22:04:14.979556  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0321 22:04:14.991317  153142 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0321 22:04:14.994079  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:04:15.002648  153142 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915 for IP: 192.168.58.2
	I0321 22:04:15.002686  153142 certs.go:186] acquiring lock for shared ca certs: {Name:mke51456f2089c678c8a8085b7dd3883448bd6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.002813  153142 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key
	I0321 22:04:15.002853  153142 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key
	I0321 22:04:15.002902  153142 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key
	I0321 22:04:15.002913  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt with IP's: []
	I0321 22:04:15.127952  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt ...
	I0321 22:04:15.127979  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt: {Name:mk08d3ec2c118c71923c4a509551dcfa9361e19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.128131  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key ...
	I0321 22:04:15.128145  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key: {Name:mk072e10625389a05c2d097d968f2cb300fdc41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.128217  153142 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041
	I0321 22:04:15.128234  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0321 22:04:15.250113  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 ...
	I0321 22:04:15.250144  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041: {Name:mk3eede047e77adf4f04779d508cf7739e315510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.250294  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041 ...
	I0321 22:04:15.250305  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041: {Name:mk672094b844df5edd66a3029ed0f0575a93df11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.250375  153142 certs.go:333] copying /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt
	I0321 22:04:15.250434  153142 certs.go:337] copying /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key
	I0321 22:04:15.250480  153142 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key
	I0321 22:04:15.250492  153142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt with IP's: []
	I0321 22:04:15.392358  153142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt ...
	I0321 22:04:15.392389  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt: {Name:mk24f3246f94d8a0d04a4cd8ba3a4340840af825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.392539  153142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key ...
	I0321 22:04:15.392550  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key: {Name:mk516f91b566d06a1f166ce5c17af69261bf9a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:15.392614  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0321 22:04:15.392631  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0321 22:04:15.392642  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0321 22:04:15.392654  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0321 22:04:15.392666  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0321 22:04:15.392680  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0321 22:04:15.392694  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0321 22:04:15.392706  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0321 22:04:15.392753  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem (1338 bytes)
	W0321 22:04:15.392785  153142 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532_empty.pem, impossibly tiny 0 bytes
	I0321 22:04:15.392797  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem (1675 bytes)
	I0321 22:04:15.392820  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem (1082 bytes)
	I0321 22:04:15.392842  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem (1123 bytes)
	I0321 22:04:15.392863  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem (1675 bytes)
	I0321 22:04:15.392904  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:04:15.392927  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem -> /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.392940  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.392953  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.393461  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0321 22:04:15.410275  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0321 22:04:15.426112  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0321 22:04:15.441734  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0321 22:04:15.457375  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0321 22:04:15.473102  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0321 22:04:15.488716  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0321 22:04:15.504671  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0321 22:04:15.520658  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem --> /usr/share/ca-certificates/10532.pem (1338 bytes)
	I0321 22:04:15.536364  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /usr/share/ca-certificates/105322.pem (1708 bytes)
	I0321 22:04:15.551891  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0321 22:04:15.568292  153142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0321 22:04:15.579982  153142 ssh_runner.go:195] Run: openssl version
	I0321 22:04:15.584195  153142 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0321 22:04:15.584322  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105322.pem && ln -fs /usr/share/ca-certificates/105322.pem /etc/ssl/certs/105322.pem"
	I0321 22:04:15.591218  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593858  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593906  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.593942  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105322.pem
	I0321 22:04:15.598198  153142 command_runner.go:130] > 3ec20f2e
	I0321 22:04:15.598263  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105322.pem /etc/ssl/certs/3ec20f2e.0"
	I0321 22:04:15.604852  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0321 22:04:15.611389  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613929  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613950  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.613983  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:04:15.618052  153142 command_runner.go:130] > b5213941
	I0321 22:04:15.618245  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0321 22:04:15.625048  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10532.pem && ln -fs /usr/share/ca-certificates/10532.pem /etc/ssl/certs/10532.pem"
	I0321 22:04:15.631810  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634524  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634659  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.634709  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10532.pem
	I0321 22:04:15.638900  153142 command_runner.go:130] > 51391683
	I0321 22:04:15.639126  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10532.pem /etc/ssl/certs/51391683.0"
	I0321 22:04:15.645803  153142 kubeadm.go:401] StartCluster: {Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:04:15.645921  153142 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0321 22:04:15.661796  153142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0321 22:04:15.668233  153142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0321 22:04:15.668259  153142 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0321 22:04:15.668272  153142 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0321 22:04:15.668868  153142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0321 22:04:15.675639  153142 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0321 22:04:15.675681  153142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0321 22:04:15.681936  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0321 22:04:15.681960  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0321 22:04:15.681972  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0321 22:04:15.681984  153142 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0321 22:04:15.682030  153142 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0321 22:04:15.682066  153142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0321 22:04:15.719562  153142 kubeadm.go:322] W0321 22:04:15.718862    1403 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:04:15.719598  153142 command_runner.go:130] ! W0321 22:04:15.718862    1403 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:04:15.758204  153142 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:04:15.758235  153142 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:04:15.819087  153142 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:04:15.819116  153142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:04:28.476596  153142 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
	I0321 22:04:28.476635  153142 command_runner.go:130] > [init] Using Kubernetes version: v1.26.2
	I0321 22:04:28.476721  153142 kubeadm.go:322] [preflight] Running pre-flight checks
	I0321 22:04:28.476736  153142 command_runner.go:130] > [preflight] Running pre-flight checks
	I0321 22:04:28.476843  153142 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:04:28.476856  153142 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:04:28.476929  153142 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:04:28.476941  153142 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:04:28.476990  153142 kubeadm.go:322] OS: Linux
	I0321 22:04:28.477017  153142 command_runner.go:130] > OS: Linux
	I0321 22:04:28.477116  153142 kubeadm.go:322] CGROUPS_CPU: enabled
	I0321 22:04:28.477131  153142 command_runner.go:130] > CGROUPS_CPU: enabled
	I0321 22:04:28.477203  153142 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0321 22:04:28.477221  153142 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0321 22:04:28.477295  153142 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0321 22:04:28.477305  153142 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0321 22:04:28.477376  153142 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0321 22:04:28.477386  153142 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0321 22:04:28.477448  153142 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0321 22:04:28.477458  153142 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0321 22:04:28.477537  153142 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0321 22:04:28.477547  153142 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0321 22:04:28.477616  153142 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0321 22:04:28.477631  153142 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0321 22:04:28.477712  153142 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0321 22:04:28.477761  153142 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0321 22:04:28.477845  153142 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0321 22:04:28.477857  153142 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0321 22:04:28.477995  153142 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0321 22:04:28.478009  153142 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0321 22:04:28.478141  153142 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0321 22:04:28.478154  153142 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0321 22:04:28.478261  153142 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0321 22:04:28.478273  153142 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0321 22:04:28.478358  153142 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0321 22:04:28.481197  153142 out.go:204]   - Generating certificates and keys ...
	I0321 22:04:28.478432  153142 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0321 22:04:28.481346  153142 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0321 22:04:28.481368  153142 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0321 22:04:28.481443  153142 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0321 22:04:28.481452  153142 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0321 22:04:28.481512  153142 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0321 22:04:28.481518  153142 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0321 22:04:28.481561  153142 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0321 22:04:28.481565  153142 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0321 22:04:28.481634  153142 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0321 22:04:28.481647  153142 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0321 22:04:28.481702  153142 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0321 22:04:28.481710  153142 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0321 22:04:28.481774  153142 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0321 22:04:28.481782  153142 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0321 22:04:28.481931  153142 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.481938  153142 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482066  153142 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0321 22:04:28.482084  153142 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0321 22:04:28.482223  153142 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482230  153142 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-860915] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0321 22:04:28.482308  153142 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0321 22:04:28.482315  153142 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0321 22:04:28.482392  153142 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0321 22:04:28.482397  153142 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0321 22:04:28.482445  153142 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0321 22:04:28.482451  153142 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0321 22:04:28.482518  153142 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0321 22:04:28.482535  153142 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0321 22:04:28.482592  153142 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0321 22:04:28.482599  153142 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0321 22:04:28.482661  153142 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0321 22:04:28.482667  153142 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0321 22:04:28.482752  153142 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0321 22:04:28.482759  153142 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0321 22:04:28.482825  153142 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0321 22:04:28.482834  153142 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0321 22:04:28.482947  153142 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:04:28.482954  153142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:04:28.483023  153142 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:04:28.483027  153142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:04:28.483057  153142 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0321 22:04:28.483060  153142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0321 22:04:28.483114  153142 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0321 22:04:28.484745  153142 out.go:204]   - Booting up control plane ...
	I0321 22:04:28.483246  153142 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0321 22:04:28.484864  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0321 22:04:28.484883  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0321 22:04:28.485006  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0321 22:04:28.485015  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0321 22:04:28.485112  153142 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0321 22:04:28.485137  153142 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0321 22:04:28.485266  153142 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0321 22:04:28.485281  153142 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0321 22:04:28.485440  153142 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0321 22:04:28.485447  153142 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0321 22:04:28.485519  153142 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.501800 seconds
	I0321 22:04:28.485526  153142 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.501800 seconds
	I0321 22:04:28.485658  153142 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0321 22:04:28.485669  153142 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0321 22:04:28.485820  153142 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0321 22:04:28.485830  153142 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0321 22:04:28.485896  153142 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0321 22:04:28.485902  153142 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0321 22:04:28.486113  153142 command_runner.go:130] > [mark-control-plane] Marking the node multinode-860915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0321 22:04:28.486127  153142 kubeadm.go:322] [mark-control-plane] Marking the node multinode-860915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0321 22:04:28.486185  153142 command_runner.go:130] > [bootstrap-token] Using token: sw9hi7.obyze4s7kes6ja14
	I0321 22:04:28.486194  153142 kubeadm.go:322] [bootstrap-token] Using token: sw9hi7.obyze4s7kes6ja14
	I0321 22:04:28.487538  153142 out.go:204]   - Configuring RBAC rules ...
	I0321 22:04:28.487641  153142 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0321 22:04:28.487660  153142 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0321 22:04:28.487755  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0321 22:04:28.487765  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0321 22:04:28.487976  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0321 22:04:28.487993  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0321 22:04:28.488159  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0321 22:04:28.488169  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0321 22:04:28.488316  153142 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0321 22:04:28.488326  153142 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0321 22:04:28.488468  153142 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0321 22:04:28.488482  153142 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0321 22:04:28.488608  153142 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0321 22:04:28.488615  153142 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0321 22:04:28.488654  153142 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0321 22:04:28.488661  153142 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0321 22:04:28.488701  153142 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0321 22:04:28.488707  153142 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0321 22:04:28.488711  153142 kubeadm.go:322] 
	I0321 22:04:28.488762  153142 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0321 22:04:28.488768  153142 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0321 22:04:28.488772  153142 kubeadm.go:322] 
	I0321 22:04:28.488871  153142 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0321 22:04:28.488883  153142 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0321 22:04:28.488893  153142 kubeadm.go:322] 
	I0321 22:04:28.488927  153142 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0321 22:04:28.488936  153142 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0321 22:04:28.488991  153142 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0321 22:04:28.489010  153142 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0321 22:04:28.489087  153142 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0321 22:04:28.489099  153142 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0321 22:04:28.489110  153142 kubeadm.go:322] 
	I0321 22:04:28.489179  153142 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0321 22:04:28.489188  153142 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0321 22:04:28.489199  153142 kubeadm.go:322] 
	I0321 22:04:28.489263  153142 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0321 22:04:28.489271  153142 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0321 22:04:28.489277  153142 kubeadm.go:322] 
	I0321 22:04:28.489349  153142 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0321 22:04:28.489358  153142 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0321 22:04:28.489463  153142 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0321 22:04:28.489483  153142 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0321 22:04:28.489585  153142 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0321 22:04:28.489597  153142 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0321 22:04:28.489603  153142 kubeadm.go:322] 
	I0321 22:04:28.489718  153142 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0321 22:04:28.489727  153142 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0321 22:04:28.489839  153142 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0321 22:04:28.489855  153142 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0321 22:04:28.489875  153142 kubeadm.go:322] 
	I0321 22:04:28.490031  153142 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490054  153142 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490198  153142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 \
	I0321 22:04:28.490213  153142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 \
	I0321 22:04:28.490245  153142 command_runner.go:130] > 	--control-plane 
	I0321 22:04:28.490254  153142 kubeadm.go:322] 	--control-plane 
	I0321 22:04:28.490265  153142 kubeadm.go:322] 
	I0321 22:04:28.490398  153142 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0321 22:04:28.490412  153142 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0321 22:04:28.490423  153142 kubeadm.go:322] 
	I0321 22:04:28.490539  153142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490549  153142 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sw9hi7.obyze4s7kes6ja14 \
	I0321 22:04:28.490676  153142 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:04:28.490696  153142 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:04:28.490705  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:04:28.490722  153142 cni.go:136] 1 nodes found, recommending kindnet
	I0321 22:04:28.492327  153142 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0321 22:04:28.493570  153142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0321 22:04:28.497140  153142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0321 22:04:28.497156  153142 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0321 22:04:28.497161  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1320614     Links: 1
	I0321 22:04:28.497167  153142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:04:28.497172  153142 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:04:28.497177  153142 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:04:28.497182  153142 command_runner.go:130] > Change: 2023-03-21 21:49:52.361193928 +0000
	I0321 22:04:28.497186  153142 command_runner.go:130] >  Birth: -
	I0321 22:04:28.497296  153142 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0321 22:04:28.497314  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0321 22:04:28.510389  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0321 22:04:29.287722  153142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0321 22:04:29.293117  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0321 22:04:29.298369  153142 command_runner.go:130] > serviceaccount/kindnet created
	I0321 22:04:29.306519  153142 command_runner.go:130] > daemonset.apps/kindnet created
	I0321 22:04:29.309659  153142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0321 22:04:29.309717  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.309717  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4 minikube.k8s.io/name=multinode-860915 minikube.k8s.io/updated_at=2023_03_21T22_04_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.317099  153142 command_runner.go:130] > -16
	I0321 22:04:29.386841  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0321 22:04:29.390290  153142 ops.go:34] apiserver oom_adj: -16
	I0321 22:04:29.403137  153142 command_runner.go:130] > node/multinode-860915 labeled
	I0321 22:04:29.403149  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:29.474378  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:29.977554  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:30.038206  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:30.477845  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:30.541280  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:30.977547  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:31.036257  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:31.477325  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:31.539623  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:31.977193  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:32.035636  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:32.477758  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:32.538581  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:32.977166  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:33.035944  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:33.477928  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:33.539580  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:33.977079  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:34.037511  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:34.477080  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:34.535859  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:34.977094  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:35.037239  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:35.477274  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:35.536247  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:35.977275  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:36.036638  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:36.477555  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:36.540393  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:36.976977  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:37.037109  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:37.476969  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:37.539953  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:37.977623  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:38.039986  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:38.477623  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:38.538182  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:38.977436  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:39.040478  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:39.477083  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:39.539394  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:39.976959  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:40.037867  153142 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0321 22:04:40.477207  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0321 22:04:40.539605  153142 command_runner.go:130] > NAME      SECRETS   AGE
	I0321 22:04:40.539629  153142 command_runner.go:130] > default   0         0s
	I0321 22:04:40.542140  153142 kubeadm.go:1073] duration metric: took 11.232475575s to wait for elevateKubeSystemPrivileges.
	I0321 22:04:40.542168  153142 kubeadm.go:403] StartCluster complete in 24.89636851s
	I0321 22:04:40.542187  153142 settings.go:142] acquiring lock: {Name:mk64852ffcce32dfbe0aa61ac3d7147ea68ec4bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:40.542264  153142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.543069  153142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/kubeconfig: {Name:mk5a118d4705650f833f938dc560fa34945ea156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:04:40.543271  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0321 22:04:40.543471  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:40.543399  153142 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0321 22:04:40.543531  153142 addons.go:66] Setting storage-provisioner=true in profile "multinode-860915"
	I0321 22:04:40.543549  153142 addons.go:228] Setting addon storage-provisioner=true in "multinode-860915"
	I0321 22:04:40.543565  153142 addons.go:66] Setting default-storageclass=true in profile "multinode-860915"
	I0321 22:04:40.543582  153142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-860915"
	I0321 22:04:40.543598  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.543600  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:04:40.543856  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:40.543962  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.544170  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.544522  153142 cert_rotation.go:137] Starting client certificate rotation controller
	I0321 22:04:40.544771  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:40.544788  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.544800  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.544810  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.554053  153142 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0321 22:04:40.554082  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.554092  153142 round_trippers.go:580]     Audit-Id: 94537c10-a339-43c1-9658-e21abadb1d1d
	I0321 22:04:40.554100  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.554109  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.554120  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.554133  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.554145  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:40.554157  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.554185  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"230","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.554655  153142 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"230","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.554717  153142 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:40.554730  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.554741  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.554753  153142 round_trippers.go:473]     Content-Type: application/json
	I0321 22:04:40.554766  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.560784  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:40.560806  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.560816  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.560825  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.560838  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:40.560851  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.560863  153142 round_trippers.go:580]     Audit-Id: 700caa40-41fd-4827-aedb-8f6680a0e182
	I0321 22:04:40.560875  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.560886  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.560915  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"302","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:40.622815  153142 command_runner.go:130] > apiVersion: v1
	I0321 22:04:40.622842  153142 command_runner.go:130] > data:
	I0321 22:04:40.622849  153142 command_runner.go:130] >   Corefile: |
	I0321 22:04:40.622856  153142 command_runner.go:130] >     .:53 {
	I0321 22:04:40.622862  153142 command_runner.go:130] >         errors
	I0321 22:04:40.622869  153142 command_runner.go:130] >         health {
	I0321 22:04:40.622877  153142 command_runner.go:130] >            lameduck 5s
	I0321 22:04:40.622883  153142 command_runner.go:130] >         }
	I0321 22:04:40.622889  153142 command_runner.go:130] >         ready
	I0321 22:04:40.622903  153142 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0321 22:04:40.622913  153142 command_runner.go:130] >            pods insecure
	I0321 22:04:40.622921  153142 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0321 22:04:40.622932  153142 command_runner.go:130] >            ttl 30
	I0321 22:04:40.622942  153142 command_runner.go:130] >         }
	I0321 22:04:40.622948  153142 command_runner.go:130] >         prometheus :9153
	I0321 22:04:40.622957  153142 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0321 22:04:40.622967  153142 command_runner.go:130] >            max_concurrent 1000
	I0321 22:04:40.622972  153142 command_runner.go:130] >         }
	I0321 22:04:40.622981  153142 command_runner.go:130] >         cache 30
	I0321 22:04:40.622993  153142 command_runner.go:130] >         loop
	I0321 22:04:40.623002  153142 command_runner.go:130] >         reload
	I0321 22:04:40.623008  153142 command_runner.go:130] >         loadbalance
	I0321 22:04:40.623016  153142 command_runner.go:130] >     }
	I0321 22:04:40.623022  153142 command_runner.go:130] > kind: ConfigMap
	I0321 22:04:40.623028  153142 command_runner.go:130] > metadata:
	I0321 22:04:40.623039  153142 command_runner.go:130] >   creationTimestamp: "2023-03-21T22:04:28Z"
	I0321 22:04:40.623048  153142 command_runner.go:130] >   name: coredns
	I0321 22:04:40.623061  153142 command_runner.go:130] >   namespace: kube-system
	I0321 22:04:40.623070  153142 command_runner.go:130] >   resourceVersion: "226"
	I0321 22:04:40.623077  153142 command_runner.go:130] >   uid: 28ab58e1-d4da-4e29-b7d3-1b4efa392be9
	I0321 22:04:40.623251  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0321 22:04:40.623966  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:40.624252  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:40.624623  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0321 22:04:40.624637  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:40.624647  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:40.624657  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:40.627498  153142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:04:40.626504  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:40.628832  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:40.628844  153142 round_trippers.go:580]     Content-Length: 109
	I0321 22:04:40.628853  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:40 GMT
	I0321 22:04:40.628875  153142 round_trippers.go:580]     Audit-Id: e6fcb386-d870-44ab-a01e-a2874f9c4158
	I0321 22:04:40.628887  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:40.628898  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:40.628907  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:40.628918  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:40.628939  153142 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"302"},"items":[]}
	I0321 22:04:40.628940  153142 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:04:40.629041  153142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0321 22:04:40.629092  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:40.629177  153142 addons.go:228] Setting addon default-storageclass=true in "multinode-860915"
	I0321 22:04:40.629216  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:04:40.629579  153142 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:04:40.726845  153142 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0321 22:04:40.726867  153142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0321 22:04:40.726910  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:04:40.729301  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:40.795872  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:04:40.881645  153142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:04:40.897232  153142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0321 22:04:40.904392  153142 command_runner.go:130] > configmap/coredns replaced
	I0321 22:04:40.904428  153142 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0321 22:04:41.061782  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:04:41.061805  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.061813  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.061820  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.069905  153142 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0321 22:04:41.069952  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.069963  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.069974  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.069988  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.070003  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:04:41.070039  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.070049  153142 round_trippers.go:580]     Audit-Id: 974bfdc4-c1e8-4e2c-8575-343407b3a2a6
	I0321 22:04:41.070081  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.070315  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"302","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0321 22:04:41.070437  153142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-860915" context rescaled to 1 replicas
	I0321 22:04:41.070477  153142 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0321 22:04:41.072318  153142 out.go:177] * Verifying Kubernetes components...
	I0321 22:04:41.073758  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:04:41.414101  153142 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0321 22:04:41.414130  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0321 22:04:41.414141  153142 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0321 22:04:41.414153  153142 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0321 22:04:41.414165  153142 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0321 22:04:41.414173  153142 command_runner.go:130] > pod/storage-provisioner created
	I0321 22:04:41.414226  153142 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0321 22:04:41.415734  153142 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0321 22:04:41.414770  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:04:41.417076  153142 addons.go:499] enable addons completed in 873.682871ms: enabled=[storage-provisioner default-storageclass]
	I0321 22:04:41.417308  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:04:41.417563  153142 node_ready.go:35] waiting up to 6m0s for node "multinode-860915" to be "Ready" ...
	I0321 22:04:41.417615  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.417623  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.417631  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.417639  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.419065  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.419084  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.419095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.419104  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.419113  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.419122  153142 round_trippers.go:580]     Audit-Id: bc468676-c445-4b0f-9b0c-16279d29ccb1
	I0321 22:04:41.419138  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.419145  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.419241  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:41.419924  153142 node_ready.go:49] node "multinode-860915" has status "Ready":"True"
	I0321 22:04:41.419939  153142 node_ready.go:38] duration metric: took 2.362614ms waiting for node "multinode-860915" to be "Ready" ...
	I0321 22:04:41.419949  153142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:04:41.420041  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:41.420055  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.420067  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.420080  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.471249  153142 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I0321 22:04:41.471278  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.471290  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.471299  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.471312  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.471325  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.471333  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.471347  153142 round_trippers.go:580]     Audit-Id: 4aecddd7-622c-4d24-85c1-7b842c2e77a4
	I0321 22:04:41.472300  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"362"},"items":[{"metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0
ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 59099 chars]
	I0321 22:04:41.476444  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-69rb6" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:41.476560  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:41.476573  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.476584  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.476593  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.481065  153142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0321 22:04:41.481086  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.481097  153142 round_trippers.go:580]     Audit-Id: 835f4f99-3c7a-4581-9df7-641847d78a6c
	I0321 22:04:41.481114  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.481123  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.481134  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.481157  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.481174  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.481300  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:41.481799  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.481817  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.481828  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.481839  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.483559  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.483581  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.483590  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.483600  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.483614  153142 round_trippers.go:580]     Audit-Id: 5442dd18-f9f9-4ba9-85d2-4067cd867b45
	I0321 22:04:41.483631  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.483643  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.483656  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.483916  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:41.985065  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:41.985087  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.985097  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.985106  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.987508  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:41.987536  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.987549  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.987559  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.987568  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.987582  153142 round_trippers.go:580]     Audit-Id: a5c17970-d993-4042-a90e-17a4a80f8b73
	I0321 22:04:41.987591  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.987607  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.987747  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:41.988379  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:41.988400  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:41.988412  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:41.988422  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:41.990384  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:41.990403  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:41.990413  153142 round_trippers.go:580]     Audit-Id: 3423137d-8bc4-4b92-af0b-b0699e8ebc83
	I0321 22:04:41.990422  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:41.990431  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:41.990440  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:41.990446  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:41.990452  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:41 GMT
	I0321 22:04:41.990580  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:42.484654  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:42.484741  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.484754  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.484765  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.487170  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:42.487197  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.487207  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.487216  153142 round_trippers.go:580]     Audit-Id: 45515adb-105d-484f-a700-37e51a2f83bd
	I0321 22:04:42.487227  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.487242  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.487251  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.487260  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.487827  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:42.488606  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:42.488618  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.488628  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.488639  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.493959  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:42.493986  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.493997  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.494006  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.494034  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.494043  153142 round_trippers.go:580]     Audit-Id: cb9d1204-3f7b-4c8f-879f-c19462724a0f
	I0321 22:04:42.494052  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.494065  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.494396  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:42.984534  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:42.984570  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.984584  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.984593  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.987075  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:42.987103  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.987114  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.987124  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.987133  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.987142  153142 round_trippers.go:580]     Audit-Id: 5bb27663-5f2f-4e2c-a7af-dd5af233cc5f
	I0321 22:04:42.987156  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.987166  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.987317  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:42.987988  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:42.988008  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:42.988020  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:42.988030  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:42.989827  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:42.989844  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:42.989850  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:42.989856  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:42.989861  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:42.989868  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:42 GMT
	I0321 22:04:42.989874  153142 round_trippers.go:580]     Audit-Id: cff74ebb-2dbc-417d-8458-d3b1d7fc8cd0
	I0321 22:04:42.989880  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:42.990180  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:43.485298  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:43.485322  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.485334  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.485343  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.487933  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:43.487957  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.487968  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.487977  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.487988  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.487999  153142 round_trippers.go:580]     Audit-Id: 7fbfa791-c360-43fa-a9b1-4dff800b2c97
	I0321 22:04:43.488014  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.488025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.488165  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:43.488718  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:43.488734  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.488745  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.488756  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.490735  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:43.490755  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.490764  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.490778  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.490786  153142 round_trippers.go:580]     Audit-Id: af1982b1-d399-4e09-a876-6f47f9450f12
	I0321 22:04:43.490797  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.490807  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.490820  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.490974  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:43.491310  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:43.984844  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:43.984866  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.984874  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.984880  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.987069  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:43.987086  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.987093  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.987099  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.987105  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.987114  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.987122  153142 round_trippers.go:580]     Audit-Id: 423e1679-0b17-4065-852a-68348efd8d58
	I0321 22:04:43.987137  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.987243  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:43.987733  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:43.987749  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:43.987756  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:43.987762  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:43.989553  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:43.989571  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:43.989581  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:43.989591  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:43.989600  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:43.989609  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:43 GMT
	I0321 22:04:43.989627  153142 round_trippers.go:580]     Audit-Id: 776b794c-a4c8-4429-bc17-f6e0daa2f9d5
	I0321 22:04:43.989635  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:43.989751  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:44.485414  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:44.485433  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.485441  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.485448  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.487822  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:44.487848  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.487859  153142 round_trippers.go:580]     Audit-Id: 0ea983ef-43a1-44c8-a252-49c34a03672d
	I0321 22:04:44.487867  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.487872  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.487881  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.487886  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.487894  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.487979  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"348","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0321 22:04:44.488408  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:44.488422  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.488429  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.488435  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.490115  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:44.490138  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.490149  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.490158  153142 round_trippers.go:580]     Audit-Id: 7d879d9a-81b6-403e-9e6f-de57a3b89a65
	I0321 22:04:44.490168  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.490177  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.490190  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.490202  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.490323  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:44.984982  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:44.985004  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.985015  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.985024  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.987072  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:44.987097  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.987108  153142 round_trippers.go:580]     Audit-Id: ce9c3f19-1b07-458a-8b8b-415b29750249
	I0321 22:04:44.987117  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.987125  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.987133  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.987140  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.987156  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.987263  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:44.987705  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:44.987718  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:44.987725  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:44.987731  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:44.989406  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:44.989425  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:44.989435  153142 round_trippers.go:580]     Audit-Id: c86022db-5709-4d5f-b642-979a6aa6ccfe
	I0321 22:04:44.989443  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:44.989453  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:44.989466  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:44.989473  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:44.989481  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:44 GMT
	I0321 22:04:44.989563  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.485168  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:45.485188  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.485196  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.485203  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.487268  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:45.487291  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.487301  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.487310  153142 round_trippers.go:580]     Audit-Id: 53a5f3cf-2c4a-4567-9fef-c1d7d48ef1af
	I0321 22:04:45.487322  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.487331  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.487343  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.487355  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.487445  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:45.487906  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:45.487918  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.487925  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.487931  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.489688  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.489704  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.489712  153142 round_trippers.go:580]     Audit-Id: 679af3ba-fdff-43ac-8e1f-9643de53adbd
	I0321 22:04:45.489720  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.489728  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.489737  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.489748  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.489754  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.489878  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.984461  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:45.984480  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.984488  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.984498  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.986473  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.986493  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.986502  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.986510  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.986519  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.986527  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.986538  153142 round_trippers.go:580]     Audit-Id: 1541d6db-ca52-4f82-a422-ca3d5b36e145
	I0321 22:04:45.986552  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.986699  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:45.987214  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:45.987229  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:45.987240  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:45.987250  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:45.988837  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:45.988858  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:45.988865  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:45.988870  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:45.988876  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:45 GMT
	I0321 22:04:45.988885  153142 round_trippers.go:580]     Audit-Id: 6a9e4627-72e7-4a03-aed4-579b657db522
	I0321 22:04:45.988894  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:45.988912  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:45.989030  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:45.989330  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:46.484547  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:46.484569  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.484583  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.484592  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.486663  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:46.486684  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.486691  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.486697  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.486706  153142 round_trippers.go:580]     Audit-Id: c0df069f-6b50-489f-a263-192839f1664c
	I0321 22:04:46.486714  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.486728  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.486739  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.486839  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:46.487280  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:46.487294  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.487301  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.487310  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.488878  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:46.488899  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.488908  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.488916  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.488925  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.488937  153142 round_trippers.go:580]     Audit-Id: 3e5046fe-02fc-45e8-b272-240a4c3d24b8
	I0321 22:04:46.488947  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.488959  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.489077  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:46.985269  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:46.985296  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.985308  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.985317  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.987574  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:46.987596  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.987607  153142 round_trippers.go:580]     Audit-Id: 35188258-bffd-4578-af44-aaf85090ad7c
	I0321 22:04:46.987616  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.987625  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.987634  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.987645  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.987655  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.987787  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:46.988257  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:46.988270  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:46.988277  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:46.988284  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:46.989956  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:46.989980  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:46.989990  153142 round_trippers.go:580]     Audit-Id: 5a0d1916-53d7-4cf0-bfda-0b8235e1b36b
	I0321 22:04:46.990000  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:46.990009  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:46.990044  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:46.990058  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:46.990070  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:46 GMT
	I0321 22:04:46.990161  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.484717  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:47.484738  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.484746  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.484752  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.487128  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:47.487153  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.487164  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.487172  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.487178  153142 round_trippers.go:580]     Audit-Id: d0d8efbb-284f-4e97-9e48-2c3fb565343a
	I0321 22:04:47.487183  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.487189  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.487196  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.487294  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:47.487817  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:47.487835  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.487846  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.487856  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.489815  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:47.489837  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.489844  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.489850  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.489856  153142 round_trippers.go:580]     Audit-Id: 0b02e5eb-0669-4053-b1d0-3fa9aa788bfa
	I0321 22:04:47.489861  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.489867  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.489872  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.490049  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.984610  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:47.984638  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.984650  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.984660  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.987238  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:47.987263  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.987274  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.987284  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.987296  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.987307  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.987325  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.987338  153142 round_trippers.go:580]     Audit-Id: 18e51c12-f396-4b7d-aeff-772acf914b86
	I0321 22:04:47.987465  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:47.988058  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:47.988077  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:47.988089  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:47.988100  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:47.990007  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:47.990046  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:47.990056  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:47 GMT
	I0321 22:04:47.990064  153142 round_trippers.go:580]     Audit-Id: 9dc070e3-8f3e-4e6f-8476-b77a7ca7d0da
	I0321 22:04:47.990072  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:47.990086  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:47.990095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:47.990107  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:47.990208  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:47.990578  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:48.484479  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:48.484505  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.484517  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.484527  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.487382  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.487403  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.487424  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.487434  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.487446  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.487455  153142 round_trippers.go:580]     Audit-Id: 55cbbb31-3c8b-4aa9-ba97-7551344bc56b
	I0321 22:04:48.487464  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.487473  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.487595  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:48.488145  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:48.488161  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.488175  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.488186  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.490165  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:48.490183  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.490189  153142 round_trippers.go:580]     Audit-Id: 9c37da4d-c305-465e-a8ab-80b9ca498bf9
	I0321 22:04:48.490195  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.490201  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.490209  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.490218  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.490227  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.490361  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"309","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:48.985150  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:48.985172  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.985181  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.985187  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.987939  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.987969  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.987981  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.987991  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.988000  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.988009  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.988018  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.988031  153142 round_trippers.go:580]     Audit-Id: 7efa780b-8a64-4a08-84f9-1dfa59a8c533
	I0321 22:04:48.988163  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:48.988794  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:48.988811  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:48.988822  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:48.988832  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:48.990981  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:48.991003  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:48.991012  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:48.991020  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:48 GMT
	I0321 22:04:48.991029  153142 round_trippers.go:580]     Audit-Id: c26bf01b-5c12-45fd-847b-11cb3a88329d
	I0321 22:04:48.991039  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:48.991055  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:48.991065  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:48.991197  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.484727  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:49.484749  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.484760  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.484772  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.487426  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.487457  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.487467  153142 round_trippers.go:580]     Audit-Id: 8c042c27-08c3-4471-a73e-d62bf819701c
	I0321 22:04:49.487475  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.487483  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.487491  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.487499  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.487507  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.487664  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:49.488236  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:49.488248  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.488258  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.488267  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.490494  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.490519  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.490530  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.490540  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.490549  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.490559  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.490572  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.490581  153142 round_trippers.go:580]     Audit-Id: 9338d5e3-9be2-4f3b-8ce3-fb786608a8f8
	I0321 22:04:49.490735  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.985216  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:49.985242  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.985253  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.985263  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.987930  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.988003  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.988019  153142 round_trippers.go:580]     Audit-Id: 73d1a6ac-d31e-4005-a0a3-799ab3fbdce5
	I0321 22:04:49.988029  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.988039  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.988051  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.988064  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.988074  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.988222  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:49.988803  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:49.988819  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:49.988831  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:49.988841  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:49.990946  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:49.990970  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:49.990981  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:49.990990  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:49.991000  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:49.991012  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:49 GMT
	I0321 22:04:49.991024  153142 round_trippers.go:580]     Audit-Id: c6e8d7cc-14e9-48df-9777-23b132fa4ab3
	I0321 22:04:49.991036  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:49.991135  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:49.991543  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:50.484484  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:50.484503  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.484511  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.484518  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.487059  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.487084  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.487095  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.487104  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.487113  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.487122  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.487136  153142 round_trippers.go:580]     Audit-Id: a2836ba2-e44d-48b8-95be-a52078ebec5b
	I0321 22:04:50.487145  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.487244  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:50.487702  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:50.487718  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.487725  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.487731  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.489972  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.490005  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.490025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.490038  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.490058  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.490078  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.490088  153142 round_trippers.go:580]     Audit-Id: 42a0a8e7-5cc8-4491-9b4d-a14368a49b14
	I0321 22:04:50.490100  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.490291  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:50.984504  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:50.984522  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.984530  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.984538  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.987243  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:50.987269  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.987280  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.987290  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.987299  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.987312  153142 round_trippers.go:580]     Audit-Id: 2f355af5-66af-4449-a6a8-32e1072c7cfa
	I0321 22:04:50.987321  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.987330  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.987472  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:50.987959  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:50.987975  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:50.987982  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:50.987988  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:50.989939  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:50.989971  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:50.989983  153142 round_trippers.go:580]     Audit-Id: 63aad23f-db87-43dd-b112-c10bd8717929
	I0321 22:04:50.989993  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:50.990009  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:50.990039  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:50.990050  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:50.990062  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:50 GMT
	I0321 22:04:50.990236  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:51.485328  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:51.485348  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.485356  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.485363  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.490845  153142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0321 22:04:51.490874  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.490885  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.490894  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.490903  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.490913  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.490922  153142 round_trippers.go:580]     Audit-Id: 4a0d7665-4e97-40af-a95e-67c83acbf5ec
	I0321 22:04:51.490937  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.491061  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:51.491647  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:51.491677  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.491688  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.491698  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.493804  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.493824  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.493835  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.493845  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.493862  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.493871  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.493880  153142 round_trippers.go:580]     Audit-Id: 5f400417-a130-4f19-8974-e9a8155b2f2b
	I0321 22:04:51.493891  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.494095  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:51.984516  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:51.984545  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.984562  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.984568  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.987241  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.987274  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.987286  153142 round_trippers.go:580]     Audit-Id: 85c00281-e19c-4c04-829f-af5273ac03d4
	I0321 22:04:51.987296  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.987309  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.987322  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.987338  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.987350  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.987478  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:51.988047  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:51.988064  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:51.988075  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:51.988084  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:51.990167  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:51.990190  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:51.990202  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:51 GMT
	I0321 22:04:51.990212  153142 round_trippers.go:580]     Audit-Id: 024c87eb-02f9-4e75-945b-4da1264e3a70
	I0321 22:04:51.990221  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:51.990229  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:51.990239  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:51.990248  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:51.990426  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:52.484705  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:52.484732  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.484744  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.484754  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.487323  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.487351  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.487360  153142 round_trippers.go:580]     Audit-Id: 9d068a19-3ca9-4e48-a003-ad80e5e52d39
	I0321 22:04:52.487370  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.487379  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.487393  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.487402  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.487415  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.487557  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:52.488127  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:52.488148  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.488159  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.488169  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.490376  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.490399  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.490409  153142 round_trippers.go:580]     Audit-Id: 75ac35a9-baa0-4566-8316-5ac22944b8d3
	I0321 22:04:52.490418  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.490428  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.490449  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.490462  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.490474  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.490606  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:52.490985  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:52.985189  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:52.985216  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.985228  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.985239  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.987644  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.987670  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.987680  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.987689  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.987698  153142 round_trippers.go:580]     Audit-Id: b784c229-40e8-4d5a-915b-af8bae3e795c
	I0321 22:04:52.987708  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.987719  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.987732  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.987872  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:52.988462  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:52.988483  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:52.988494  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:52.988504  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:52.990673  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:52.990694  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:52.990704  153142 round_trippers.go:580]     Audit-Id: 30590a51-be0a-4c1f-abfc-46a8dde6413d
	I0321 22:04:52.990714  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:52.990721  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:52.990731  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:52.990744  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:52.990756  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:52 GMT
	I0321 22:04:52.990896  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:53.484522  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:53.484541  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.484549  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.484556  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.487119  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:53.487144  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.487156  153142 round_trippers.go:580]     Audit-Id: 9bdd6514-e0de-4b23-b459-4c71be70ee81
	I0321 22:04:53.487166  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.487184  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.487197  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.487207  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.487220  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.487339  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:53.487875  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:53.487891  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.487902  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.487912  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.489891  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:53.489912  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.489921  153142 round_trippers.go:580]     Audit-Id: 6e7602cb-ba99-4dbf-a4f9-a54d85fd99a1
	I0321 22:04:53.489930  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.489938  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.489947  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.489960  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.489973  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.490093  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:53.985066  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:53.985093  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.985106  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.985116  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.987659  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:53.987689  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.987701  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.987711  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.987726  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.987739  153142 round_trippers.go:580]     Audit-Id: a4f97a4e-806c-43e9-8c32-6fc34126ddec
	I0321 22:04:53.987752  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.987764  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.987893  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:53.988461  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:53.988477  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:53.988489  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:53.988503  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:53.990362  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:53.990385  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:53.990396  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:53.990405  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:53.990418  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:53.990431  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:53 GMT
	I0321 22:04:53.990447  153142 round_trippers.go:580]     Audit-Id: 1ad08d2a-8347-4673-9693-79f4c28456a9
	I0321 22:04:53.990457  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:53.990618  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.485231  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:54.485250  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.485258  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.485264  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.487711  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.487733  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.487744  153142 round_trippers.go:580]     Audit-Id: 077a7806-8ab8-484f-bcf2-3b5f72e9bb30
	I0321 22:04:54.487753  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.487762  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.487793  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.487807  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.487817  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.487938  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:54.488489  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:54.488503  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.488515  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.488533  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.490447  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:54.490472  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.490482  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.490490  153142 round_trippers.go:580]     Audit-Id: f0176315-dd9e-4888-a1e2-8b323df0d5fb
	I0321 22:04:54.490499  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.490506  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.490542  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.490557  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.490664  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.984955  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:54.984980  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.984992  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.985002  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.987700  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.987729  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.987739  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.987748  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.987758  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.987767  153142 round_trippers.go:580]     Audit-Id: 30650966-37ff-4687-a55a-66178a417e62
	I0321 22:04:54.987777  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.987791  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.987919  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:54.988537  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:54.988555  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:54.988566  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:54.988576  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:54.990641  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:54.990666  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:54.990678  153142 round_trippers.go:580]     Audit-Id: 7ab860df-8080-4bd8-9716-abcb5f5a8778
	I0321 22:04:54.990688  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:54.990698  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:54.990709  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:54.990727  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:54.990740  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:54 GMT
	I0321 22:04:54.990876  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:54.991289  153142 pod_ready.go:102] pod "coredns-787d4945fb-69rb6" in "kube-system" namespace has status "Ready":"False"
	I0321 22:04:55.485479  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:55.485501  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.485513  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.485523  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.487972  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:55.487997  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.488008  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.488018  153142 round_trippers.go:580]     Audit-Id: 33f3d116-9d25-403a-9c68-d12d49b569c4
	I0321 22:04:55.488034  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.488043  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.488053  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.488068  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.488181  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:55.488786  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:55.488806  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.488818  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.488828  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.490808  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:55.490836  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.490847  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.490862  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.490876  153142 round_trippers.go:580]     Audit-Id: 40475a74-d1ed-4874-a6a5-d399a29cfab9
	I0321 22:04:55.490886  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.490921  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.490935  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.491068  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:55.984570  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:55.984598  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.984611  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.984621  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.986803  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:55.986831  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.986843  153142 round_trippers.go:580]     Audit-Id: 100be0d4-0e62-437e-a7bc-42c3a1f9f34b
	I0321 22:04:55.986853  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.986862  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.986871  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.986881  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.986893  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.987019  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-69rb6","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"a381b86f-bcde-484c-878c-056280374301","resourceVersion":"394","creationTimestamp":"2023-03-21T22:04:41Z","deletionTimestamp":"2023-03-21T22:05:11Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0321 22:04:55.987573  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:55.987591  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:55.987603  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:55.987613  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:55.989411  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:55.989430  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:55.989441  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:55.989450  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:55.989458  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:55.989468  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:55.989486  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:55 GMT
	I0321 22:04:55.989500  153142 round_trippers.go:580]     Audit-Id: 6e27a4ea-aed8-4695-9bce-a7c0bbf0c912
	I0321 22:04:55.989616  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:56.485270  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-69rb6
	I0321 22:04:56.485291  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.485299  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.485306  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.487092  153142 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0321 22:04:56.487116  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.487127  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.487137  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.487146  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.487153  153142 round_trippers.go:580]     Content-Length: 216
	I0321 22:04:56.487160  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.487165  153142 round_trippers.go:580]     Audit-Id: 2013cc6d-c930-4fad-9338-458e2389f31b
	I0321 22:04:56.487174  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.487195  153142 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-69rb6\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-69rb6","kind":"pods"},"code":404}
	I0321 22:04:56.487358  153142 pod_ready.go:97] error getting pod "coredns-787d4945fb-69rb6" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-69rb6" not found
	I0321 22:04:56.487377  153142 pod_ready.go:81] duration metric: took 15.010875895s waiting for pod "coredns-787d4945fb-69rb6" in "kube-system" namespace to be "Ready" ...
	E0321 22:04:56.487385  153142 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-69rb6" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-69rb6" not found
	I0321 22:04:56.487394  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:56.487434  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:56.487441  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.487449  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.487455  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.489310  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.489333  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.489344  153142 round_trippers.go:580]     Audit-Id: 038a7ccb-6a2c-48b1-8c92-baa1b587466c
	I0321 22:04:56.489353  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.489363  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.489375  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.489382  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.489390  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.489542  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"415","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0321 22:04:56.489944  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:56.489955  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.489962  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.489968  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.491489  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.491505  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.491513  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.491520  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.491529  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.491541  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.491556  153142 round_trippers.go:580]     Audit-Id: eb821716-8c1c-4419-9765-e6b994cc69ba
	I0321 22:04:56.491565  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.491665  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:56.992663  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:56.992685  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.992696  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.992705  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.994798  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:56.994818  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.994825  153142 round_trippers.go:580]     Audit-Id: 1de2fca9-5bd7-4829-9c38-cf9c57d5e3ba
	I0321 22:04:56.994831  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.994837  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.994842  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.994848  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.994853  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.994939  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"415","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0321 22:04:56.995364  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:56.995377  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:56.995384  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:56.995390  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:56.997003  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:56.997030  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:56.997040  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:56.997049  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:56.997062  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:56.997071  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:56.997083  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:56 GMT
	I0321 22:04:56.997093  153142 round_trippers.go:580]     Audit-Id: a296f251-b979-45b9-baee-2bdfa9c2e650
	I0321 22:04:56.997202  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.492730  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:04:57.492752  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.492765  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.492775  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.494987  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.495006  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.495013  153142 round_trippers.go:580]     Audit-Id: 8b043704-ab42-457f-9664-f91ff2eccbf9
	I0321 22:04:57.495019  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.495025  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.495034  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.495043  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.495056  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.495147  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0321 22:04:57.495595  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.495609  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.495616  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.495624  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.497423  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.497442  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.497450  153142 round_trippers.go:580]     Audit-Id: 578befb9-85b7-47dd-968b-c9e1c5120c07
	I0321 22:04:57.497459  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.497468  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.497479  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.497492  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.497506  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.497613  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.497894  153142 pod_ready.go:92] pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.497916  153142 pod_ready.go:81] duration metric: took 1.010513718s waiting for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.497930  153142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.497982  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-860915
	I0321 22:04:57.497991  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.498002  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.498033  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.499716  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.499735  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.499744  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.499753  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.499762  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.499775  153142 round_trippers.go:580]     Audit-Id: f082e3ef-31c2-49ba-b3e3-82bd3a99c8a1
	I0321 22:04:57.499788  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.499804  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.499898  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-860915","namespace":"kube-system","uid":"8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b","resourceVersion":"277","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.mirror":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.seen":"2023-03-21T22:04:28.326473783Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0321 22:04:57.500251  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.500263  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.500270  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.500276  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.501635  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.501655  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.501666  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.501675  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.501681  153142 round_trippers.go:580]     Audit-Id: efdf7d07-e65e-44d9-9fc0-746dffffd179
	I0321 22:04:57.501687  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.501692  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.501698  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.501829  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.502151  153142 pod_ready.go:92] pod "etcd-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.502164  153142 pod_ready.go:81] duration metric: took 4.223502ms waiting for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.502180  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.502232  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-860915
	I0321 22:04:57.502244  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.502257  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.502270  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.503722  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.503738  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.503748  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.503758  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.503814  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.503829  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.503841  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.503851  153142 round_trippers.go:580]     Audit-Id: e2284b40-f351-41f2-9dbc-e557243becf1
	I0321 22:04:57.503972  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-860915","namespace":"kube-system","uid":"1f990298-d202-4148-ac4a-b5f713f9fd83","resourceVersion":"274","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.mirror":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.seen":"2023-03-21T22:04:28.326475235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0321 22:04:57.504334  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.504348  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.504358  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.504378  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.505599  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.505615  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.505624  153142 round_trippers.go:580]     Audit-Id: e91ec303-f5ce-4b8e-97fc-50c8112c5014
	I0321 22:04:57.505633  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.505641  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.505650  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.505662  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.505673  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.505733  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.506052  153142 pod_ready.go:92] pod "kube-apiserver-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.506064  153142 pod_ready.go:81] duration metric: took 3.875533ms waiting for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.506074  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.506120  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-860915
	I0321 22:04:57.506130  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.506141  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.506153  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.507480  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.507494  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.507500  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.507506  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.507511  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.507517  153142 round_trippers.go:580]     Audit-Id: fa32c58d-12fb-41c4-9acb-6e0923b5488d
	I0321 22:04:57.507523  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.507532  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.507618  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-860915","namespace":"kube-system","uid":"6c6a8cd8-e27e-40e9-910f-c3d9b56c6882","resourceVersion":"391","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.mirror":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.seen":"2023-03-21T22:04:28.326453711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0321 22:04:57.507960  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.507973  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.507980  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.507987  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.509186  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.509200  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.509207  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.509213  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.509222  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.509233  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.509244  153142 round_trippers.go:580]     Audit-Id: b2fceeee-3672-44ad-9485-59de1d5be06c
	I0321 22:04:57.509250  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.509329  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.509555  153142 pod_ready.go:92] pod "kube-controller-manager-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.509564  153142 pod_ready.go:81] duration metric: took 3.484348ms waiting for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.509573  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.509603  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:04:57.509610  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.509618  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.509624  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.510949  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.510968  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.510978  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.510987  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.510995  153142 round_trippers.go:580]     Audit-Id: 10044fd8-5914-4468-b352-0eff97ad19b9
	I0321 22:04:57.511011  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.511020  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.511032  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.511129  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-97hnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"a92d55d8-3ec3-4e8e-b31c-f24fcb440600","resourceVersion":"382","creationTimestamp":"2023-03-21T22:04:40Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0321 22:04:57.511477  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.511489  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.511496  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.511504  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.512720  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:04:57.512734  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.512741  153142 round_trippers.go:580]     Audit-Id: 62739c95-836d-4760-b04b-48f43d2bcd47
	I0321 22:04:57.512746  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.512752  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.512757  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.512762  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.512770  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.512850  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.513078  153142 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.513088  153142 pod_ready.go:81] duration metric: took 3.510613ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.513096  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.693467  153142 request.go:622] Waited for 180.321576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:04:57.693532  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:04:57.693539  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.693551  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.693565  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.695637  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.695657  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.695665  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.695670  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.695676  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.695682  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.695687  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.695694  153142 round_trippers.go:580]     Audit-Id: a1806582-30d9-4108-8e95-686618358d66
	I0321 22:04:57.695877  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-860915","namespace":"kube-system","uid":"1a170ba9-55b2-4275-be35-718bde52ddc2","resourceVersion":"272","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.mirror":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.seen":"2023-03-21T22:04:28.326472146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0321 22:04:57.893603  153142 request.go:622] Waited for 197.372537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.893653  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:04:57.893657  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.893665  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.893671  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.895717  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:57.895734  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.895740  153142 round_trippers.go:580]     Audit-Id: 4aa6391f-9e9f-4a34-b41a-59611fddc63a
	I0321 22:04:57.895746  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.895751  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.895756  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.895762  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.895767  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.895875  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0321 22:04:57.896145  153142 pod_ready.go:92] pod "kube-scheduler-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:04:57.896156  153142 pod_ready.go:81] duration metric: took 383.055576ms waiting for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:04:57.896169  153142 pod_ready.go:38] duration metric: took 16.476205427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:04:57.896193  153142 api_server.go:51] waiting for apiserver process to appear ...
	I0321 22:04:57.896234  153142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:04:57.905449  153142 command_runner.go:130] > 2095
	I0321 22:04:57.906179  153142 api_server.go:71] duration metric: took 16.835670344s to wait for apiserver process to appear ...
	I0321 22:04:57.906198  153142 api_server.go:87] waiting for apiserver healthz status ...
	I0321 22:04:57.906208  153142 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0321 22:04:57.910085  153142 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0321 22:04:57.910130  153142 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0321 22:04:57.910140  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:57.910149  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:57.910156  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:57.910809  153142 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0321 22:04:57.910823  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:57.910830  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:57.910835  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:57.910842  153142 round_trippers.go:580]     Content-Length: 263
	I0321 22:04:57.910847  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:57 GMT
	I0321 22:04:57.910853  153142 round_trippers.go:580]     Audit-Id: 5240da1a-0fad-4aab-a6bb-b35fc4eed4dd
	I0321 22:04:57.910858  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:57.910863  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:57.910879  153142 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.2",
	  "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
	  "gitTreeState": "clean",
	  "buildDate": "2023-02-22T13:32:22Z",
	  "goVersion": "go1.19.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0321 22:04:57.910939  153142 api_server.go:140] control plane version: v1.26.2
	I0321 22:04:57.910951  153142 api_server.go:130] duration metric: took 4.748684ms to wait for apiserver health ...
	I0321 22:04:57.910957  153142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0321 22:04:58.093331  153142 request.go:622] Waited for 182.317735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.093377  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.093382  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.093390  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.093397  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.096514  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:04:58.096546  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.096558  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.096568  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.096577  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.096589  153142 round_trippers.go:580]     Audit-Id: f334a6ef-589e-49c3-84c7-5c448cd0a57a
	I0321 22:04:58.096601  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.096614  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.097037  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0321 22:04:58.098717  153142 system_pods.go:59] 8 kube-system pods found
	I0321 22:04:58.098738  153142 system_pods.go:61] "coredns-787d4945fb-wx8p9" [8b510dd8-761f-469a-8ccc-d08beb282e56] Running
	I0321 22:04:58.098745  153142 system_pods.go:61] "etcd-multinode-860915" [8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b] Running
	I0321 22:04:58.098750  153142 system_pods.go:61] "kindnet-wnjrv" [2a3b424c-5776-46cc-8cce-675ab8d20f34] Running
	I0321 22:04:58.098757  153142 system_pods.go:61] "kube-apiserver-multinode-860915" [1f990298-d202-4148-ac4a-b5f713f9fd83] Running
	I0321 22:04:58.098762  153142 system_pods.go:61] "kube-controller-manager-multinode-860915" [6c6a8cd8-e27e-40e9-910f-c3d9b56c6882] Running
	I0321 22:04:58.098768  153142 system_pods.go:61] "kube-proxy-97hnd" [a92d55d8-3ec3-4e8e-b31c-f24fcb440600] Running
	I0321 22:04:58.098772  153142 system_pods.go:61] "kube-scheduler-multinode-860915" [1a170ba9-55b2-4275-be35-718bde52ddc2] Running
	I0321 22:04:58.098779  153142 system_pods.go:61] "storage-provisioner" [07f8352f-22bf-4948-aff5-af3a33cfb84e] Running
	I0321 22:04:58.098784  153142 system_pods.go:74] duration metric: took 187.822987ms to wait for pod list to return data ...
	I0321 22:04:58.098794  153142 default_sa.go:34] waiting for default service account to be created ...
	I0321 22:04:58.293201  153142 request.go:622] Waited for 194.331163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0321 22:04:58.293247  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0321 22:04:58.293260  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.293267  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.293277  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.295398  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:58.295416  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.295423  153142 round_trippers.go:580]     Content-Length: 261
	I0321 22:04:58.295429  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.295435  153142 round_trippers.go:580]     Audit-Id: aa9d2012-2c1b-4f10-9c7a-1a3ffb9db0f4
	I0321 22:04:58.295441  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.295446  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.295452  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.295461  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.295479  153142 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b8e9d182-d215-4cde-9add-038eb5f0ad0b","resourceVersion":"301","creationTimestamp":"2023-03-21T22:04:40Z"}}]}
	I0321 22:04:58.295650  153142 default_sa.go:45] found service account: "default"
	I0321 22:04:58.295663  153142 default_sa.go:55] duration metric: took 196.856404ms for default service account to be created ...
	I0321 22:04:58.295670  153142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0321 22:04:58.493081  153142 request.go:622] Waited for 197.353191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.493140  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:04:58.493145  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.493153  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.493161  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.496196  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:04:58.496223  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.496234  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.496244  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.496251  153142 round_trippers.go:580]     Audit-Id: b42c8d62-89ce-463d-b67b-970e7798183d
	I0321 22:04:58.496265  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.496278  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.496287  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.496663  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0321 22:04:58.498371  153142 system_pods.go:86] 8 kube-system pods found
	I0321 22:04:58.498392  153142 system_pods.go:89] "coredns-787d4945fb-wx8p9" [8b510dd8-761f-469a-8ccc-d08beb282e56] Running
	I0321 22:04:58.498401  153142 system_pods.go:89] "etcd-multinode-860915" [8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b] Running
	I0321 22:04:58.498411  153142 system_pods.go:89] "kindnet-wnjrv" [2a3b424c-5776-46cc-8cce-675ab8d20f34] Running
	I0321 22:04:58.498421  153142 system_pods.go:89] "kube-apiserver-multinode-860915" [1f990298-d202-4148-ac4a-b5f713f9fd83] Running
	I0321 22:04:58.498436  153142 system_pods.go:89] "kube-controller-manager-multinode-860915" [6c6a8cd8-e27e-40e9-910f-c3d9b56c6882] Running
	I0321 22:04:58.498443  153142 system_pods.go:89] "kube-proxy-97hnd" [a92d55d8-3ec3-4e8e-b31c-f24fcb440600] Running
	I0321 22:04:58.498453  153142 system_pods.go:89] "kube-scheduler-multinode-860915" [1a170ba9-55b2-4275-be35-718bde52ddc2] Running
	I0321 22:04:58.498462  153142 system_pods.go:89] "storage-provisioner" [07f8352f-22bf-4948-aff5-af3a33cfb84e] Running
	I0321 22:04:58.498475  153142 system_pods.go:126] duration metric: took 202.799516ms to wait for k8s-apps to be running ...
	I0321 22:04:58.498485  153142 system_svc.go:44] waiting for kubelet service to be running ....
	I0321 22:04:58.498532  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:04:58.507964  153142 system_svc.go:56] duration metric: took 9.473324ms WaitForService to wait for kubelet.
	I0321 22:04:58.507990  153142 kubeadm.go:578] duration metric: took 17.437477807s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0321 22:04:58.508022  153142 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:04:58.693460  153142 request.go:622] Waited for 185.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0321 22:04:58.693506  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0321 22:04:58.693511  153142 round_trippers.go:469] Request Headers:
	I0321 22:04:58.693518  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:04:58.693525  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:04:58.695752  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:04:58.695771  153142 round_trippers.go:577] Response Headers:
	I0321 22:04:58.695778  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:04:58.695784  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:04:58.695789  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:04:58.695796  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:04:58.695806  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:04:58 GMT
	I0321 22:04:58.695814  153142 round_trippers.go:580]     Audit-Id: 49dbfe2d-7e76-40b6-943c-5da452a35ee6
	I0321 22:04:58.695977  153142 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"402","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5052 chars]
	I0321 22:04:58.696326  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:04:58.696353  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:04:58.696367  153142 node_conditions.go:105] duration metric: took 188.339387ms to run NodePressure ...
	I0321 22:04:58.696377  153142 start.go:228] waiting for startup goroutines ...
	I0321 22:04:58.696385  153142 start.go:233] waiting for cluster config update ...
	I0321 22:04:58.696397  153142 start.go:242] writing updated cluster config ...
	I0321 22:04:58.699416  153142 out.go:177] 
	I0321 22:04:58.701316  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:04:58.701389  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:58.703369  153142 out.go:177] * Starting worker node multinode-860915-m02 in cluster multinode-860915
	I0321 22:04:58.704754  153142 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 22:04:58.706275  153142 out.go:177] * Pulling base image ...
	I0321 22:04:58.708101  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:58.708125  153142 cache.go:57] Caching tarball of preloaded images
	I0321 22:04:58.708125  153142 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 22:04:58.708217  153142 preload.go:174] Found /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0321 22:04:58.708234  153142 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0321 22:04:58.708340  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:04:58.772762  153142 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0321 22:04:58.772785  153142 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0321 22:04:58.772803  153142 cache.go:193] Successfully downloaded all kic artifacts
	I0321 22:04:58.772829  153142 start.go:364] acquiring machines lock for multinode-860915-m02: {Name:mk031987672620a4f648b7cea3a75ff5f4c6353f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:04:58.772925  153142 start.go:368] acquired machines lock for "multinode-860915-m02" in 77.142µs
	I0321 22:04:58.772950  153142 start.go:93] Provisioning new machine with config: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:04:58.773036  153142 start.go:125] createHost starting for "m02" (driver="docker")
	I0321 22:04:58.775364  153142 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0321 22:04:58.775463  153142 start.go:159] libmachine.API.Create for "multinode-860915" (driver="docker")
	I0321 22:04:58.775486  153142 client.go:168] LocalClient.Create starting
	I0321 22:04:58.775556  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem
	I0321 22:04:58.775587  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:58.775602  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:58.775653  153142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem
	I0321 22:04:58.775671  153142 main.go:141] libmachine: Decoding PEM data...
	I0321 22:04:58.775682  153142 main.go:141] libmachine: Parsing certificate...
	I0321 22:04:58.775867  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:04:58.840933  153142 network_create.go:76] Found existing network {name:multinode-860915 subnet:0xc001946000 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0321 22:04:58.840972  153142 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-860915-m02" container
	I0321 22:04:58.841041  153142 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0321 22:04:58.908626  153142 cli_runner.go:164] Run: docker volume create multinode-860915-m02 --label name.minikube.sigs.k8s.io=multinode-860915-m02 --label created_by.minikube.sigs.k8s.io=true
	I0321 22:04:58.975214  153142 oci.go:103] Successfully created a docker volume multinode-860915-m02
	I0321 22:04:58.975288  153142 cli_runner.go:164] Run: docker run --rm --name multinode-860915-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915-m02 --entrypoint /usr/bin/test -v multinode-860915-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0321 22:04:59.586775  153142 oci.go:107] Successfully prepared a docker volume multinode-860915-m02
	I0321 22:04:59.586814  153142 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 22:04:59.586837  153142 kic.go:190] Starting extracting preloaded images to volume ...
	I0321 22:04:59.586897  153142 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0321 22:05:04.543271  153142 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-860915-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956327495s)
	I0321 22:05:04.543300  153142 kic.go:199] duration metric: took 4.956459 seconds to extract preloaded images to volume
	W0321 22:05:04.543411  153142 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0321 22:05:04.543487  153142 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0321 22:05:04.661899  153142 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-860915-m02 --name multinode-860915-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-860915-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-860915-m02 --network multinode-860915 --ip 192.168.58.3 --volume multinode-860915-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0321 22:05:05.070332  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Running}}
	I0321 22:05:05.136415  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.204463  153142 cli_runner.go:164] Run: docker exec multinode-860915-m02 stat /var/lib/dpkg/alternatives/iptables
	I0321 22:05:05.320439  153142 oci.go:144] the created container "multinode-860915-m02" has a running status.
	I0321 22:05:05.320471  153142 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa...
	I0321 22:05:05.489497  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0321 22:05:05.489545  153142 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0321 22:05:05.606050  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.674950  153142 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0321 22:05:05.674968  153142 kic_runner.go:114] Args: [docker exec --privileged multinode-860915-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0321 22:05:05.793797  153142 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:05:05.856138  153142 machine.go:88] provisioning docker machine ...
	I0321 22:05:05.856175  153142 ubuntu.go:169] provisioning hostname "multinode-860915-m02"
	I0321 22:05:05.856226  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:05.919039  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:05.919458  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:05.919475  153142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-860915-m02 && echo "multinode-860915-m02" | sudo tee /etc/hostname
	I0321 22:05:06.041515  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-860915-m02
	
	I0321 22:05:06.041592  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.104393  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.104852  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.104882  153142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-860915-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-860915-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-860915-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0321 22:05:06.217420  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0321 22:05:06.217449  153142 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16124-3841/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-3841/.minikube}
	I0321 22:05:06.217468  153142 ubuntu.go:177] setting up certificates
	I0321 22:05:06.217477  153142 provision.go:83] configureAuth start
	I0321 22:05:06.217521  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:06.279578  153142 provision.go:138] copyHostCerts
	I0321 22:05:06.279624  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:05:06.279666  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem, removing ...
	I0321 22:05:06.279677  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem
	I0321 22:05:06.279756  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/ca.pem (1082 bytes)
	I0321 22:05:06.279826  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:05:06.279846  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem, removing ...
	I0321 22:05:06.279850  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem
	I0321 22:05:06.279880  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/cert.pem (1123 bytes)
	I0321 22:05:06.279942  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:05:06.279965  153142 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem, removing ...
	I0321 22:05:06.279970  153142 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem
	I0321 22:05:06.279999  153142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-3841/.minikube/key.pem (1675 bytes)
	I0321 22:05:06.280940  153142 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem org=jenkins.multinode-860915-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-860915-m02]
	I0321 22:05:06.424772  153142 provision.go:172] copyRemoteCerts
	I0321 22:05:06.424824  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0321 22:05:06.424855  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.486712  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:06.568628  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0321 22:05:06.568684  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0321 22:05:06.585328  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0321 22:05:06.585397  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0321 22:05:06.602009  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0321 22:05:06.602086  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0321 22:05:06.618149  153142 provision.go:86] duration metric: configureAuth took 400.663052ms
	I0321 22:05:06.618171  153142 ubuntu.go:193] setting minikube options for container-runtime
	I0321 22:05:06.618333  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:05:06.618391  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.681414  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.681833  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.681851  153142 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0321 22:05:06.793802  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0321 22:05:06.793841  153142 ubuntu.go:71] root file system type: overlay
	I0321 22:05:06.793960  153142 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0321 22:05:06.794007  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:06.857202  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:06.857616  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:06.857675  153142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0321 22:05:06.977731  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0321 22:05:06.977794  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.038293  153142 main.go:141] libmachine: Using SSH client type: native
	I0321 22:05:07.038752  153142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0321 22:05:07.038777  153142 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0321 22:05:07.653684  153142 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-21 22:05:06.973195044 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0321 22:05:07.653720  153142 machine.go:91] provisioned docker machine in 1.79755383s
	I0321 22:05:07.653734  153142 client.go:171] LocalClient.Create took 8.878240806s
	I0321 22:05:07.653758  153142 start.go:167] duration metric: libmachine.API.Create for "multinode-860915" took 8.878293644s
	I0321 22:05:07.653771  153142 start.go:300] post-start starting for "multinode-860915-m02" (driver="docker")
	I0321 22:05:07.653785  153142 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0321 22:05:07.653849  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0321 22:05:07.653910  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.719963  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:07.805010  153142 ssh_runner.go:195] Run: cat /etc/os-release
	I0321 22:05:07.807384  153142 command_runner.go:130] > NAME="Ubuntu"
	I0321 22:05:07.807404  153142 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0321 22:05:07.807411  153142 command_runner.go:130] > ID=ubuntu
	I0321 22:05:07.807419  153142 command_runner.go:130] > ID_LIKE=debian
	I0321 22:05:07.807427  153142 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0321 22:05:07.807432  153142 command_runner.go:130] > VERSION_ID="20.04"
	I0321 22:05:07.807437  153142 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0321 22:05:07.807445  153142 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0321 22:05:07.807450  153142 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0321 22:05:07.807460  153142 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0321 22:05:07.807465  153142 command_runner.go:130] > VERSION_CODENAME=focal
	I0321 22:05:07.807470  153142 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0321 22:05:07.807541  153142 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0321 22:05:07.807557  153142 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0321 22:05:07.807566  153142 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0321 22:05:07.807576  153142 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0321 22:05:07.807590  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/addons for local assets ...
	I0321 22:05:07.807643  153142 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-3841/.minikube/files for local assets ...
	I0321 22:05:07.807731  153142 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> 105322.pem in /etc/ssl/certs
	I0321 22:05:07.807742  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /etc/ssl/certs/105322.pem
	I0321 22:05:07.807845  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0321 22:05:07.813962  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:05:07.829637  153142 start.go:303] post-start completed in 175.84866ms
	I0321 22:05:07.829954  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:07.893904  153142 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/config.json ...
	I0321 22:05:07.894214  153142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:05:07.894268  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:07.955636  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.038242  153142 command_runner.go:130] > 17%! (MISSING)
	I0321 22:05:08.038313  153142 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:05:08.041694  153142 command_runner.go:130] > 242G
	I0321 22:05:08.041797  153142 start.go:128] duration metric: createHost completed in 9.268749395s
	I0321 22:05:08.041820  153142 start.go:83] releasing machines lock for "multinode-860915-m02", held for 9.268883029s
	I0321 22:05:08.041883  153142 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:05:08.106692  153142 out.go:177] * Found network options:
	I0321 22:05:08.108261  153142 out.go:177]   - NO_PROXY=192.168.58.2
	W0321 22:05:08.109568  153142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0321 22:05:08.109612  153142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0321 22:05:08.109685  153142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0321 22:05:08.109729  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:08.109731  153142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0321 22:05:08.109775  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:05:08.179407  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.183357  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:05:08.291318  153142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0321 22:05:08.291384  153142 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0321 22:05:08.291402  153142 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0321 22:05:08.291411  153142 command_runner.go:130] > Device: c5h/197d	Inode: 1322525     Links: 1
	I0321 22:05:08.291420  153142 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:05:08.291431  153142 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:05:08.291442  153142 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0321 22:05:08.291452  153142 command_runner.go:130] > Change: 2023-03-21 21:49:53.137271995 +0000
	I0321 22:05:08.291463  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:08.291523  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0321 22:05:08.311114  153142 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0321 22:05:08.311171  153142 ssh_runner.go:195] Run: which cri-dockerd
	I0321 22:05:08.313632  153142 command_runner.go:130] > /usr/bin/cri-dockerd
	I0321 22:05:08.313840  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0321 22:05:08.319968  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0321 22:05:08.331810  153142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0321 22:05:08.345433  153142 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0321 22:05:08.345474  153142 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0321 22:05:08.345487  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:05:08.345518  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:05:08.345606  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:05:08.356340  153142 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0321 22:05:08.357014  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0321 22:05:08.364057  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0321 22:05:08.370943  153142 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0321 22:05:08.370991  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0321 22:05:08.377859  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:05:08.384998  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0321 22:05:08.392019  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:05:08.398930  153142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0321 22:05:08.405637  153142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0321 22:05:08.412550  153142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0321 22:05:08.418304  153142 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0321 22:05:08.418353  153142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0321 22:05:08.423937  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:08.498504  153142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:05:08.576442  153142 start.go:485] detecting cgroup driver to use...
	I0321 22:05:08.576497  153142 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0321 22:05:08.576544  153142 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0321 22:05:08.585171  153142 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0321 22:05:08.585189  153142 command_runner.go:130] > [Unit]
	I0321 22:05:08.585197  153142 command_runner.go:130] > Description=Docker Application Container Engine
	I0321 22:05:08.585206  153142 command_runner.go:130] > Documentation=https://docs.docker.com
	I0321 22:05:08.585212  153142 command_runner.go:130] > BindsTo=containerd.service
	I0321 22:05:08.585221  153142 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0321 22:05:08.585228  153142 command_runner.go:130] > Wants=network-online.target
	I0321 22:05:08.585238  153142 command_runner.go:130] > Requires=docker.socket
	I0321 22:05:08.585243  153142 command_runner.go:130] > StartLimitBurst=3
	I0321 22:05:08.585247  153142 command_runner.go:130] > StartLimitIntervalSec=60
	I0321 22:05:08.585251  153142 command_runner.go:130] > [Service]
	I0321 22:05:08.585255  153142 command_runner.go:130] > Type=notify
	I0321 22:05:08.585258  153142 command_runner.go:130] > Restart=on-failure
	I0321 22:05:08.585262  153142 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0321 22:05:08.585270  153142 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0321 22:05:08.585288  153142 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0321 22:05:08.585301  153142 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0321 22:05:08.585317  153142 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0321 22:05:08.585329  153142 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0321 22:05:08.585338  153142 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0321 22:05:08.585349  153142 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0321 22:05:08.585366  153142 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0321 22:05:08.585381  153142 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0321 22:05:08.585391  153142 command_runner.go:130] > ExecStart=
	I0321 22:05:08.585412  153142 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0321 22:05:08.585428  153142 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0321 22:05:08.585434  153142 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0321 22:05:08.585440  153142 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0321 22:05:08.585445  153142 command_runner.go:130] > LimitNOFILE=infinity
	I0321 22:05:08.585449  153142 command_runner.go:130] > LimitNPROC=infinity
	I0321 22:05:08.585453  153142 command_runner.go:130] > LimitCORE=infinity
	I0321 22:05:08.585462  153142 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0321 22:05:08.585467  153142 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0321 22:05:08.585475  153142 command_runner.go:130] > TasksMax=infinity
	I0321 22:05:08.585479  153142 command_runner.go:130] > TimeoutStartSec=0
	I0321 22:05:08.585487  153142 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0321 22:05:08.585491  153142 command_runner.go:130] > Delegate=yes
	I0321 22:05:08.585501  153142 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0321 22:05:08.585505  153142 command_runner.go:130] > KillMode=process
	I0321 22:05:08.585509  153142 command_runner.go:130] > [Install]
	I0321 22:05:08.585513  153142 command_runner.go:130] > WantedBy=multi-user.target
	I0321 22:05:08.585992  153142 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0321 22:05:08.586080  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0321 22:05:08.595607  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:05:08.608091  153142 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0321 22:05:08.609382  153142 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0321 22:05:08.715913  153142 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0321 22:05:08.810986  153142 docker.go:531] configuring docker to use "cgroupfs" as cgroup driver...
	I0321 22:05:08.811029  153142 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0321 22:05:08.824097  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:08.907829  153142 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0321 22:05:09.104797  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:05:09.177286  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0321 22:05:09.177361  153142 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0321 22:05:09.246711  153142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0321 22:05:09.328389  153142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:05:09.404764  153142 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0321 22:05:09.415005  153142 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0321 22:05:09.415056  153142 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0321 22:05:09.417745  153142 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0321 22:05:09.417760  153142 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0321 22:05:09.417766  153142 command_runner.go:130] > Device: d3h/211d	Inode: 206         Links: 1
	I0321 22:05:09.417773  153142 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0321 22:05:09.417778  153142 command_runner.go:130] > Access: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417783  153142 command_runner.go:130] > Modify: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417789  153142 command_runner.go:130] > Change: 2023-03-21 22:05:09.409440070 +0000
	I0321 22:05:09.417793  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:09.417871  153142 start.go:553] Will wait 60s for crictl version
	I0321 22:05:09.417906  153142 ssh_runner.go:195] Run: which crictl
	I0321 22:05:09.420246  153142 command_runner.go:130] > /usr/bin/crictl
	I0321 22:05:09.420368  153142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0321 22:05:09.493861  153142 command_runner.go:130] > Version:  0.1.0
	I0321 22:05:09.493885  153142 command_runner.go:130] > RuntimeName:  docker
	I0321 22:05:09.493889  153142 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0321 22:05:09.493894  153142 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0321 22:05:09.493909  153142 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0321 22:05:09.493956  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:05:09.515851  153142 command_runner.go:130] > 23.0.1
	I0321 22:05:09.515914  153142 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0321 22:05:09.534342  153142 command_runner.go:130] > 23.0.1
	I0321 22:05:09.537217  153142 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
	I0321 22:05:09.538644  153142 out.go:177]   - env NO_PROXY=192.168.58.2
	I0321 22:05:09.540044  153142 cli_runner.go:164] Run: docker network inspect multinode-860915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0321 22:05:09.603538  153142 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0321 22:05:09.606651  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
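	[Editor's note] The command above updates `/etc/hosts` idempotently: `grep -v` first strips any existing `host.minikube.internal` line, then the fresh mapping is appended, so repeated runs never accumulate duplicates. A small Go sketch of the same idea (the helper name `ensureHostsEntry` is illustrative, not from minikube):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any stale line for name, then appends the desired
// "ip<TAB>name" mapping — the same grep -v / append pattern as the log's
// /etc/hosts update, and equally idempotent.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove any existing entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	hosts = ensureHostsEntry(hosts, "192.168.58.1", "host.minikube.internal")
	// Running it again changes nothing — the stale line is replaced, not duplicated.
	hosts = ensureHostsEntry(hosts, "192.168.58.1", "host.minikube.internal")
	fmt.Print(hosts)
}
```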
	I0321 22:05:09.615314  153142 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915 for IP: 192.168.58.3
	I0321 22:05:09.615337  153142 certs.go:186] acquiring lock for shared ca certs: {Name:mke51456f2089c678c8a8085b7dd3883448bd6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:05:09.615461  153142 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key
	I0321 22:05:09.615509  153142 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key
	I0321 22:05:09.615526  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0321 22:05:09.615538  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0321 22:05:09.615550  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0321 22:05:09.615561  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0321 22:05:09.615619  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem (1338 bytes)
	W0321 22:05:09.615654  153142 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532_empty.pem, impossibly tiny 0 bytes
	I0321 22:05:09.615664  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca-key.pem (1675 bytes)
	I0321 22:05:09.615697  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/ca.pem (1082 bytes)
	I0321 22:05:09.615732  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/cert.pem (1123 bytes)
	I0321 22:05:09.615761  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/home/jenkins/minikube-integration/16124-3841/.minikube/certs/key.pem (1675 bytes)
	I0321 22:05:09.615818  153142 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem (1708 bytes)
	I0321 22:05:09.615850  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.615869  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem -> /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.615888  153142 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem -> /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.616290  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0321 22:05:09.632246  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0321 22:05:09.648150  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0321 22:05:09.663428  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0321 22:05:09.678610  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0321 22:05:09.694413  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/certs/10532.pem --> /usr/share/ca-certificates/10532.pem (1338 bytes)
	I0321 22:05:09.709875  153142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/ssl/certs/105322.pem --> /usr/share/ca-certificates/105322.pem (1708 bytes)
	I0321 22:05:09.725404  153142 ssh_runner.go:195] Run: openssl version
	I0321 22:05:09.729460  153142 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0321 22:05:09.729651  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0321 22:05:09.736146  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738843  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738899  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.738943  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:05:09.743066  153142 command_runner.go:130] > b5213941
	I0321 22:05:09.743253  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0321 22:05:09.749742  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10532.pem && ln -fs /usr/share/ca-certificates/10532.pem /etc/ssl/certs/10532.pem"
	I0321 22:05:09.756329  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.758933  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.759014  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:53 /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.759062  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10532.pem
	I0321 22:05:09.763087  153142 command_runner.go:130] > 51391683
	I0321 22:05:09.763252  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10532.pem /etc/ssl/certs/51391683.0"
	I0321 22:05:09.769587  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105322.pem && ln -fs /usr/share/ca-certificates/105322.pem /etc/ssl/certs/105322.pem"
	I0321 22:05:09.776224  153142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.778861  153142 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.778966  153142 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:53 /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.779006  153142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105322.pem
	I0321 22:05:09.783165  153142 command_runner.go:130] > 3ec20f2e
	I0321 22:05:09.783322  153142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105322.pem /etc/ssl/certs/3ec20f2e.0"
	I0321 22:05:09.789821  153142 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0321 22:05:09.809969  153142 command_runner.go:130] > cgroupfs
	I0321 22:05:09.810896  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:05:09.810910  153142 cni.go:136] 2 nodes found, recommending kindnet
	I0321 22:05:09.810918  153142 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0321 22:05:09.810935  153142 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-860915 NodeName:multinode-860915-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0321 22:05:09.811040  153142 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-860915-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0321 22:05:09.811092  153142 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-860915-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0321 22:05:09.811129  153142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
	I0321 22:05:09.817360  153142 command_runner.go:130] > kubeadm
	I0321 22:05:09.817373  153142 command_runner.go:130] > kubectl
	I0321 22:05:09.817378  153142 command_runner.go:130] > kubelet
	I0321 22:05:09.817873  153142 binaries.go:44] Found k8s binaries, skipping transfer
	I0321 22:05:09.817915  153142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0321 22:05:09.823861  153142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0321 22:05:09.835199  153142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0321 22:05:09.847031  153142 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0321 22:05:09.849827  153142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:05:09.858525  153142 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:05:09.858744  153142 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:05:09.858775  153142 start.go:301] JoinCluster: &{Name:multinode-860915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:multinode-860915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:05:09.858873  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0321 22:05:09.858916  153142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:05:09.920238  153142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:05:10.051021  153142 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 
	I0321 22:05:10.055139  153142 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:05:10.055179  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-860915-m02"
	I0321 22:05:10.089341  153142 command_runner.go:130] ! W0321 22:05:10.089004    1339 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0321 22:05:10.114065  153142 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1030-gcp\n", err: exit status 1
	I0321 22:05:10.178325  153142 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0321 22:05:11.811827  153142 command_runner.go:130] > [preflight] Running pre-flight checks
	I0321 22:05:11.811856  153142 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0321 22:05:11.811868  153142 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1030-gcp
	I0321 22:05:11.811875  153142 command_runner.go:130] > OS: Linux
	I0321 22:05:11.811883  153142 command_runner.go:130] > CGROUPS_CPU: enabled
	I0321 22:05:11.811893  153142 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0321 22:05:11.811905  153142 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0321 22:05:11.811917  153142 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0321 22:05:11.811929  153142 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0321 22:05:11.811940  153142 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0321 22:05:11.811952  153142 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0321 22:05:11.811961  153142 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0321 22:05:11.811972  153142 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0321 22:05:11.811985  153142 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0321 22:05:11.812000  153142 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0321 22:05:11.812014  153142 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0321 22:05:11.812029  153142 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0321 22:05:11.812041  153142 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0321 22:05:11.812060  153142 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0321 22:05:11.812071  153142 command_runner.go:130] > This node has joined the cluster:
	I0321 22:05:11.812081  153142 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0321 22:05:11.812094  153142 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0321 22:05:11.812108  153142 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0321 22:05:11.812135  153142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rd21xo.u261sww8h66igfey --discovery-token-ca-cert-hash sha256:4d1226b8391a06c26001ed628fb6437347623d36214654a185910576da6ae050 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-860915-m02": (1.756941479s)
	I0321 22:05:11.812157  153142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0321 22:05:11.984273  153142 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0321 22:05:11.984314  153142 start.go:303] JoinCluster complete in 2.125536119s
	I0321 22:05:11.984327  153142 cni.go:84] Creating CNI manager for ""
	I0321 22:05:11.984334  153142 cni.go:136] 2 nodes found, recommending kindnet
	I0321 22:05:11.984376  153142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0321 22:05:11.987408  153142 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0321 22:05:11.987431  153142 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0321 22:05:11.987442  153142 command_runner.go:130] > Device: 36h/54d	Inode: 1320614     Links: 1
	I0321 22:05:11.987451  153142 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0321 22:05:11.987460  153142 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:05:11.987468  153142 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0321 22:05:11.987482  153142 command_runner.go:130] > Change: 2023-03-21 21:49:52.361193928 +0000
	I0321 22:05:11.987490  153142 command_runner.go:130] >  Birth: -
	I0321 22:05:11.987546  153142 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.2/kubectl ...
	I0321 22:05:11.987555  153142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0321 22:05:11.999294  153142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0321 22:05:12.148289  153142 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0321 22:05:12.151758  153142 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0321 22:05:12.153628  153142 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0321 22:05:12.164270  153142 command_runner.go:130] > daemonset.apps/kindnet configured
	I0321 22:05:12.168684  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:05:12.168909  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:05:12.169279  153142 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0321 22:05:12.169296  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.169308  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.169322  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.172517  153142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0321 22:05:12.172547  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.172557  153142 round_trippers.go:580]     Audit-Id: 90714f64-4c46-48d0-b5ee-de7134882a14
	I0321 22:05:12.172567  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.172579  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.172593  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.172606  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.172619  153142 round_trippers.go:580]     Content-Length: 291
	I0321 22:05:12.172632  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.172660  153142 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7330eee3-984a-48af-9eda-cf12dc6be18f","resourceVersion":"429","creationTimestamp":"2023-03-21T22:04:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0321 22:05:12.172752  153142 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-860915" context rescaled to 1 replicas
	I0321 22:05:12.172785  153142 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0321 22:05:12.175983  153142 out.go:177] * Verifying Kubernetes components...
	I0321 22:05:12.177254  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:05:12.186494  153142 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 22:05:12.186734  153142 kapi.go:59] client config for multinode-860915: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/profiles/multinode-860915/client.key", CAFile:"/home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:05:12.187010  153142 node_ready.go:35] waiting up to 6m0s for node "multinode-860915-m02" to be "Ready" ...
	I0321 22:05:12.187070  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.187080  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.187092  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.187104  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.188533  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.188552  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.188569  153142 round_trippers.go:580]     Audit-Id: da7ae12b-2b46-49cd-863e-ac9aedd72d9a
	I0321 22:05:12.188591  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.188603  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.188613  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.188626  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.188639  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.188746  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:12.189119  153142 node_ready.go:49] node "multinode-860915-m02" has status "Ready":"True"
	I0321 22:05:12.189136  153142 node_ready.go:38] duration metric: took 2.111728ms waiting for node "multinode-860915-m02" to be "Ready" ...
	I0321 22:05:12.189146  153142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:05:12.189207  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0321 22:05:12.189217  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.189229  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.189244  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.191759  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:12.191780  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.191789  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.191798  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.191810  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.191821  153142 round_trippers.go:580]     Audit-Id: eb3e94b9-140a-4b86-91f3-c8fbeeb5d5c7
	I0321 22:05:12.191881  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.191899  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.192303  153142 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0321 22:05:12.194279  153142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.194329  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-wx8p9
	I0321 22:05:12.194336  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.194343  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.194350  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.195789  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.195803  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.195810  153142 round_trippers.go:580]     Audit-Id: 3f97851f-367b-4a6b-a6ac-8926fd817b66
	I0321 22:05:12.195817  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.195826  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.195838  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.195848  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.195857  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.195933  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-wx8p9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8b510dd8-761f-469a-8ccc-d08beb282e56","resourceVersion":"425","creationTimestamp":"2023-03-21T22:04:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"62578511-5485-4104-ad26-365c24d0ad0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62578511-5485-4104-ad26-365c24d0ad0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0321 22:05:12.196329  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.196343  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.196350  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.196361  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.197734  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.197751  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.197760  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.197769  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.197782  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.197795  153142 round_trippers.go:580]     Audit-Id: ddb33783-1ccc-458f-a392-60a70e6c3cbf
	I0321 22:05:12.197808  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.197821  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.197910  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.198225  153142 pod_ready.go:92] pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.198238  153142 pod_ready.go:81] duration metric: took 3.94214ms waiting for pod "coredns-787d4945fb-wx8p9" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.198246  153142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.198283  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-860915
	I0321 22:05:12.198289  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.198296  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.198307  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.199697  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.199716  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.199726  153142 round_trippers.go:580]     Audit-Id: ef100106-9fe1-45be-9c19-bb86c99a3711
	I0321 22:05:12.199733  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.199749  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.199757  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.199766  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.199779  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.199873  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-860915","namespace":"kube-system","uid":"8e7f12bc-76cf-4d36-8034-4dadd4ba1c5b","resourceVersion":"277","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.mirror":"456cd41a0496a9e8bf278e639437f566","kubernetes.io/config.seen":"2023-03-21T22:04:28.326473783Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0321 22:05:12.200208  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.200220  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.200227  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.200233  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.201559  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.201583  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.201594  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.201605  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.201617  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.201627  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.201638  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.201647  153142 round_trippers.go:580]     Audit-Id: 06492e11-a65f-4f24-b982-b5d3f503425c
	I0321 22:05:12.201725  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.201969  153142 pod_ready.go:92] pod "etcd-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.201979  153142 pod_ready.go:81] duration metric: took 3.728684ms waiting for pod "etcd-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.201991  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.202061  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-860915
	I0321 22:05:12.202070  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.202077  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.202083  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.203509  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.203527  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.203535  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.203544  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.203556  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.203578  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.203591  153142 round_trippers.go:580]     Audit-Id: 696fcdfd-00a3-404a-8ef2-0b3705606860
	I0321 22:05:12.203604  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.203749  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-860915","namespace":"kube-system","uid":"1f990298-d202-4148-ac4a-b5f713f9fd83","resourceVersion":"274","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.mirror":"322dc81533eb2822b571df496b71ca36","kubernetes.io/config.seen":"2023-03-21T22:04:28.326475235Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0321 22:05:12.204122  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.204136  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.204146  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.204156  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.205359  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.205372  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.205381  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.205389  153142 round_trippers.go:580]     Audit-Id: fd155a7e-9abc-48bd-9027-a032ba413833
	I0321 22:05:12.205397  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.205406  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.205417  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.205425  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.205490  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.205741  153142 pod_ready.go:92] pod "kube-apiserver-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.205750  153142 pod_ready.go:81] duration metric: took 3.753653ms waiting for pod "kube-apiserver-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.205758  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.205791  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-860915
	I0321 22:05:12.205798  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.205806  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.205814  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.207261  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.207281  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.207292  153142 round_trippers.go:580]     Audit-Id: 9f6b2c19-f7d5-4722-8af5-461b29116d43
	I0321 22:05:12.207299  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.207305  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.207312  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.207318  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.207324  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.207434  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-860915","namespace":"kube-system","uid":"6c6a8cd8-e27e-40e9-910f-c3d9b56c6882","resourceVersion":"391","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.mirror":"70cebb20d8a8aec3d68c354124b20828","kubernetes.io/config.seen":"2023-03-21T22:04:28.326453711Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0321 22:05:12.207797  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.207811  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.207818  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.207825  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.209041  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.209060  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.209070  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.209079  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.209095  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.209101  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.209108  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.209116  153142 round_trippers.go:580]     Audit-Id: ae5b075f-d639-49d4-9788-1d60acf006bb
	I0321 22:05:12.209176  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.209393  153142 pod_ready.go:92] pod "kube-controller-manager-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.209403  153142 pod_ready.go:81] duration metric: took 3.637923ms waiting for pod "kube-controller-manager-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.209410  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.387768  153142 request.go:622] Waited for 178.308218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:05:12.387812  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97hnd
	I0321 22:05:12.387817  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.387825  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.387831  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.389760  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.389783  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.389791  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.389797  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.389803  153142 round_trippers.go:580]     Audit-Id: baa9643a-bdc0-41af-a67d-65bc0832fff2
	I0321 22:05:12.389808  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.389816  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.389826  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.389917  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-97hnd","generateName":"kube-proxy-","namespace":"kube-system","uid":"a92d55d8-3ec3-4e8e-b31c-f24fcb440600","resourceVersion":"382","creationTimestamp":"2023-03-21T22:04:40Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0321 22:05:12.587628  153142 request.go:622] Waited for 197.293235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.587680  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:12.587686  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.587695  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.587711  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.589622  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.589644  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.589654  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.589666  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.589679  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.589687  153142 round_trippers.go:580]     Audit-Id: 05323963-ef6c-4207-b996-5f204b5fbc0f
	I0321 22:05:12.589696  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.589702  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.589776  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:12.590100  153142 pod_ready.go:92] pod "kube-proxy-97hnd" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:12.590113  153142 pod_ready.go:81] duration metric: took 380.697553ms waiting for pod "kube-proxy-97hnd" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.590122  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-slz5b" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:12.787504  153142 request.go:622] Waited for 197.313998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:12.787553  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:12.787558  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.787565  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.787572  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.789325  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.789348  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.789356  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.789363  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.789368  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.789376  153142 round_trippers.go:580]     Audit-Id: 3b693f6b-d217-4696-a571-ca941983c01e
	I0321 22:05:12.789382  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.789390  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.789478  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"460","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0321 22:05:12.987106  153142 request.go:622] Waited for 197.27474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.987171  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:12.987181  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:12.987194  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:12.987205  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:12.988820  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:12.988838  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:12.988845  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:12 GMT
	I0321 22:05:12.988852  153142 round_trippers.go:580]     Audit-Id: bf3c3f92-5f90-44ee-859f-8b2cc3a03905
	I0321 22:05:12.988857  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:12.988863  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:12.988868  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:12.988874  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:12.988994  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.490129  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:13.490208  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.490224  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.490233  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.492296  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:13.492321  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.492332  153142 round_trippers.go:580]     Audit-Id: ceb30675-c858-4576-b4f9-a135d4404669
	I0321 22:05:13.492342  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.492354  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.492370  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.492383  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.492393  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.492507  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"460","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0321 22:05:13.492898  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:13.492915  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.492925  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.492933  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.494505  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.494531  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.494541  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.494554  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.494564  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.494577  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.494586  153142 round_trippers.go:580]     Audit-Id: 3b21460b-7e60-4f00-8003-91a45cc0124a
	I0321 22:05:13.494595  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.494673  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.989560  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-slz5b
	I0321 22:05:13.989585  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.989598  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.989609  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.992160  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:13.992184  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.992194  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.992202  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.992212  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.992226  153142 round_trippers.go:580]     Audit-Id: 8b1330c9-2f1b-4f03-adae-76c8c5e1465f
	I0321 22:05:13.992235  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.992244  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.992368  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slz5b","generateName":"kube-proxy-","namespace":"kube-system","uid":"659a0266-e910-4295-970d-58b18791cad1","resourceVersion":"483","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"controller-revision-hash":"6646d95c56","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cda4f06c-b817-4058-b4d0-e429752e2f27","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cda4f06c-b817-4058-b4d0-e429752e2f27\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0321 22:05:13.992867  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915-m02
	I0321 22:05:13.992881  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.992892  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.992902  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.994841  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.994888  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.994910  153142 round_trippers.go:580]     Audit-Id: 46de0112-aced-41d0-8d29-b82716c9e4ab
	I0321 22:05:13.994930  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.994952  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.994981  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.994995  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.995005  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.995108  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915-m02","uid":"a069319f-e992-4fef-b387-9b925cda4dae","resourceVersion":"474","creationTimestamp":"2023-03-21T22:05:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:05:11Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0321 22:05:13.995435  153142 pod_ready.go:92] pod "kube-proxy-slz5b" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:13.995459  153142 pod_ready.go:81] duration metric: took 1.405328991s waiting for pod "kube-proxy-slz5b" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:13.995471  153142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:13.995527  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-860915
	I0321 22:05:13.995536  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:13.995548  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:13.995567  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:13.997349  153142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0321 22:05:13.997368  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:13.997377  153142 round_trippers.go:580]     Audit-Id: eed98e3f-58ea-417b-93eb-f80c4b72dca6
	I0321 22:05:13.997385  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:13.997420  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:13.997438  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:13.997452  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:13.997462  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:13 GMT
	I0321 22:05:13.997601  153142 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-860915","namespace":"kube-system","uid":"1a170ba9-55b2-4275-be35-718bde52ddc2","resourceVersion":"272","creationTimestamp":"2023-03-21T22:04:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.mirror":"689743ba99dabd4496ce52934787709c","kubernetes.io/config.seen":"2023-03-21T22:04:28.326472146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-03-21T22:04:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0321 22:05:14.187283  153142 request.go:622] Waited for 189.274605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:14.187359  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-860915
	I0321 22:05:14.187374  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:14.187386  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:14.187401  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:14.189633  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:14.189658  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:14.189674  153142 round_trippers.go:580]     Audit-Id: a2e5bdc6-0a45-49a4-99fb-0faeae0a2b55
	I0321 22:05:14.189683  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:14.189698  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:14.189708  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:14.189717  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:14.189750  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:14 GMT
	I0321 22:05:14.194173  153142 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-03-21T22:04:25Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0321 22:05:14.195107  153142 pod_ready.go:92] pod "kube-scheduler-multinode-860915" in "kube-system" namespace has status "Ready":"True"
	I0321 22:05:14.195129  153142 pod_ready.go:81] duration metric: took 199.645863ms waiting for pod "kube-scheduler-multinode-860915" in "kube-system" namespace to be "Ready" ...
	I0321 22:05:14.195145  153142 pod_ready.go:38] duration metric: took 2.005987166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:05:14.195176  153142 system_svc.go:44] waiting for kubelet service to be running ....
	I0321 22:05:14.195234  153142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:05:14.205396  153142 system_svc.go:56] duration metric: took 10.214135ms WaitForService to wait for kubelet.
	I0321 22:05:14.205445  153142 kubeadm.go:578] duration metric: took 2.032624025s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0321 22:05:14.205472  153142 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:05:14.387879  153142 request.go:622] Waited for 182.316479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0321 22:05:14.387949  153142 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0321 22:05:14.387959  153142 round_trippers.go:469] Request Headers:
	I0321 22:05:14.387972  153142 round_trippers.go:473]     Accept: application/json, */*
	I0321 22:05:14.387986  153142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0321 22:05:14.390525  153142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0321 22:05:14.390553  153142 round_trippers.go:577] Response Headers:
	I0321 22:05:14.390564  153142 round_trippers.go:580]     Cache-Control: no-cache, private
	I0321 22:05:14.390573  153142 round_trippers.go:580]     Content-Type: application/json
	I0321 22:05:14.390582  153142 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2758d835-9d74-4960-bb51-f1411eb191c5
	I0321 22:05:14.390591  153142 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3f9e48da-5355-48b1-9e14-4943bd23f602
	I0321 22:05:14.390606  153142 round_trippers.go:580]     Date: Tue, 21 Mar 2023 22:05:14 GMT
	I0321 22:05:14.390615  153142 round_trippers.go:580]     Audit-Id: 848a0bc4-9246-4f49-babc-f2e3f61ade69
	I0321 22:05:14.390812  153142 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"484"},"items":[{"metadata":{"name":"multinode-860915","uid":"e767a218-637f-404e-afcb-ad2752a753cd","resourceVersion":"430","creationTimestamp":"2023-03-21T22:04:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-860915","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b6238450160ebd3d5010da9938125282f0eedd4","minikube.k8s.io/name":"multinode-860915","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_03_21T22_04_29_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0321 22:05:14.391439  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:05:14.391460  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:05:14.391473  153142 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0321 22:05:14.391479  153142 node_conditions.go:123] node cpu capacity is 8
	I0321 22:05:14.391493  153142 node_conditions.go:105] duration metric: took 186.014843ms to run NodePressure ...
	I0321 22:05:14.391506  153142 start.go:228] waiting for startup goroutines ...
	I0321 22:05:14.391537  153142 start.go:242] writing updated cluster config ...
	I0321 22:05:14.391868  153142 ssh_runner.go:195] Run: rm -f paused
	I0321 22:05:14.451412  153142 start.go:554] kubectl: 1.26.3, cluster: 1.26.2 (minor skew: 0)
	I0321 22:05:14.454591  153142 out.go:177] * Done! kubectl is now configured to use "multinode-860915" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2023-03-21 22:04:10 UTC, end at Tue 2023-03-21 22:05:22 UTC. --
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206009200Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206059904Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206073175Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206111636Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206147676Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206184263Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206218518Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206263353Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206274299Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206479984Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206497570Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.206898010Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.218797406Z" level=info msg="Loading containers: start."
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.295182813Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.329329652Z" level=info msg="Loading containers: done."
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.338331883Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.338393506Z" level=info msg="Daemon has completed initialization"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.351621777Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 21 22:04:14 multinode-860915 systemd[1]: Started Docker Application Container Engine.
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.358309509Z" level=info msg="API listen on [::]:2376"
	Mar 21 22:04:14 multinode-860915 dockerd[940]: time="2023-03-21T22:04:14.362501525Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.796361892Z" level=info msg="ignoring event" container=7c6e7c8c7ec621057c81df63b5d132049342a86d796301973478bac4d02e921e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.801023400Z" level=info msg="ignoring event" container=deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.882785681Z" level=info msg="ignoring event" container=0423e44126334b2958e61f5a0eb34ce609aa11ffbe722b96e164a3d05c2e7916 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 21 22:04:55 multinode-860915 dockerd[940]: time="2023-03-21T22:04:55.883249266Z" level=info msg="ignoring event" container=e50f99e6f9cfd51059dcc8542745b29f02e9d721eff6cd9e23db2f1d61b33cc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	4491dcfe0bce9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 seconds ago        Running             busybox                   0                   fe30fe956e470
	e04dd78d38779       5185b96f0becf                                                                                         27 seconds ago       Running             coredns                   1                   82abc3f5c9e00
	f0c8fd3eab736       kindest/kindnetd@sha256:7fc2671641a1a7e7b9b8341964bd7cfe9018f497dc41d58803f88b0cc4030e07              40 seconds ago       Running             kindnet-cni               0                   54b60a934e259
	835951ebc3451       6e38f40d628db                                                                                         41 seconds ago       Running             storage-provisioner       0                   184457719bde9
	7c6e7c8c7ec62       5185b96f0becf                                                                                         41 seconds ago       Exited              coredns                   0                   0423e44126334
	a42f910ebb092       6f64e7135a6ec                                                                                         42 seconds ago       Running             kube-proxy                0                   4c2c84241a96a
	c175274409c12       db8f409d9a5d7                                                                                         About a minute ago   Running             kube-scheduler            0                   1f08e1f037a2b
	ab8122344f03b       240e201d5b0d8                                                                                         About a minute ago   Running             kube-controller-manager   0                   3ac6bbe183a33
	7ed7891684791       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   adcbef8542b28
	ee6e07b4a24f5       63d3239c3c159                                                                                         About a minute ago   Running             kube-apiserver            0                   419ff89329152
	
	* 
	* ==> coredns [7c6e7c8c7ec6] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:39101 - 6366 "HINFO IN 7462099254160572882.2590707446242915087. udp 57 false 512" - - 0 5.000109855s
	[ERROR] plugin/errors: 2 7462099254160572882.2590707446242915087. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:53000 - 29472 "HINFO IN 7462099254160572882.2590707446242915087. udp 57 false 512" - - 0 5.000371401s
	[ERROR] plugin/errors: 2 7462099254160572882.2590707446242915087. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [e04dd78d3877] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:55966 - 28057 "HINFO IN 5623652034396813179.2270563698707063122. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010892857s
	[INFO] 10.244.0.3:49401 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231647s
	[INFO] 10.244.0.3:38352 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.012664673s
	[INFO] 10.244.0.3:43959 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.03237357s
	[INFO] 10.244.0.3:34396 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009886521s
	[INFO] 10.244.0.3:56332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168549s
	[INFO] 10.244.0.3:43007 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008105178s
	[INFO] 10.244.0.3:34301 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169373s
	[INFO] 10.244.0.3:55270 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115364s
	[INFO] 10.244.0.3:34913 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007978397s
	[INFO] 10.244.0.3:33479 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131614s
	[INFO] 10.244.0.3:60727 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139266s
	[INFO] 10.244.0.3:53647 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010849s
	[INFO] 10.244.0.3:35451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161481s
	[INFO] 10.244.0.3:54753 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128824s
	[INFO] 10.244.0.3:41932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090076s
	[INFO] 10.244.0.3:59019 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075913s
	[INFO] 10.244.0.3:35722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013547s
	[INFO] 10.244.0.3:54275 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140185s
	[INFO] 10.244.0.3:37919 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112078s
	[INFO] 10.244.0.3:39231 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130827s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-860915
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860915
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4
	                    minikube.k8s.io/name=multinode-860915
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_21T22_04_29_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 21 Mar 2023 22:04:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860915
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 21 Mar 2023 22:05:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 21 Mar 2023 22:04:59 +0000   Tue, 21 Mar 2023 22:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-860915
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                9af59bff-4966-4419-8b87-bcc5c593d400
	  Boot ID:                    527d7f15-1c0f-42e6-b299-1ad744c7814d
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-62ggt                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8s
	  kube-system                 coredns-787d4945fb-wx8p9                    100m (1%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (0%!)(MISSING)     42s
	  kube-system                 etcd-multinode-860915                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (0%!)(MISSING)       0 (0%!)(MISSING)         55s
	  kube-system                 kindnet-wnjrv                               100m (1%!)(MISSING)     100m (1%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      43s
	  kube-system                 kube-apiserver-multinode-860915             250m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         55s
	  kube-system                 kube-controller-manager-multinode-860915    200m (2%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         55s
	  kube-system                 kube-proxy-97hnd                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         43s
	  kube-system                 kube-scheduler-multinode-860915             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         55s
	  kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%!)(MISSING)  100m (1%!)(MISSING)
	  memory             220Mi (0%!)(MISSING)  220Mi (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 41s   kube-proxy       
	  Normal  Starting                 55s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s   kubelet          Node multinode-860915 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s   kubelet          Node multinode-860915 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s   kubelet          Node multinode-860915 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                55s   kubelet          Node multinode-860915 status is now: NodeReady
	  Normal  RegisteredNode           43s   node-controller  Node multinode-860915 event: Registered Node multinode-860915 in Controller
	
	
	Name:               multinode-860915-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-860915-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 21 Mar 2023 22:05:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-860915-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 21 Mar 2023 22:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 21 Mar 2023 22:05:11 +0000   Tue, 21 Mar 2023 22:05:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-860915-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                e9904412-06ae-49de-b45e-7c9d93a2667a
	  Boot ID:                    527d7f15-1c0f-42e6-b299-1ad744c7814d
	  Kernel Version:             5.15.0-1030-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.2
	  Kube-Proxy Version:         v1.26.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-kpfz8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-mhzgv               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-proxy-slz5b            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node multinode-860915-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node multinode-860915-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node multinode-860915-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12s                kubelet          Node multinode-860915-m02 status is now: NodeReady
	  Normal  RegisteredNode           8s                 node-controller  Node multinode-860915-m02 event: Registered Node multinode-860915-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.008751] FS-Cache: O-key=[8] '8aa00f0200000000'
	[  +0.006306] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007948] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=00000000dcc40d61
	[  +0.008741] FS-Cache: N-key=[8] '8aa00f0200000000'
	[  +3.771261] FS-Cache: Duplicate cookie detected
	[  +0.004700] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006768] FS-Cache: O-cookie d=00000000a1a4eac5{9p.inode} n=000000000e884808
	[  +0.007355] FS-Cache: O-key=[8] '89a00f0200000000'
	[  +0.004937] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006574] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=0000000022acdc3c
	[  +0.008733] FS-Cache: N-key=[8] '89a00f0200000000'
	[  +0.556387] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006759] FS-Cache: O-cookie d=00000000a1a4eac5{9p.inode} n=0000000085e909c2
	[  +0.007366] FS-Cache: O-key=[8] '93a00f0200000000'
	[  +0.004963] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006614] FS-Cache: N-cookie d=00000000a1a4eac5{9p.inode} n=0000000099ed53d5
	[  +0.007364] FS-Cache: N-key=[8] '93a00f0200000000'
	[Mar21 21:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Mar21 21:59] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 e9 15 e2 76 16 08 06
	[  +0.002605] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 32 7d 1c 33 67 c6 08 06
	[Mar21 22:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de be 01 a6 f6 f0 08 06
	
	* 
	* ==> etcd [7ed789168479] <==
	* {"level":"info","ts":"2023-03-21T22:04:22.772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-03-21T22:04:22.773Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-21T22:04:22.774Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-860915 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:04:23.603Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.604Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:04:23.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-03-21T22:04:23.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:05:23 up 47 min,  0 users,  load average: 2.63, 2.25, 1.50
	Linux multinode-860915 5.15.0-1030-gcp #37~20.04.1-Ubuntu SMP Mon Feb 20 04:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [f0c8fd3eab73] <==
	* I0321 22:04:43.870241       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0321 22:04:43.870296       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0321 22:04:43.870447       1 main.go:116] setting mtu 1500 for CNI 
	I0321 22:04:43.870470       1 main.go:146] kindnetd IP family: "ipv4"
	I0321 22:04:43.870489       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0321 22:04:44.168162       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:04:44.168196       1 main.go:227] handling current node
	I0321 22:04:54.181052       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:04:54.181085       1 main.go:227] handling current node
	I0321 22:05:04.192655       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:05:04.192678       1 main.go:227] handling current node
	I0321 22:05:14.197016       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0321 22:05:14.197046       1 main.go:227] handling current node
	I0321 22:05:14.197063       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0321 22:05:14.197069       1 main.go:250] Node multinode-860915-m02 has CIDR [10.244.1.0/24] 
	I0321 22:05:14.197263       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [ee6e07b4a24f] <==
	* I0321 22:04:25.299627       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0321 22:04:25.299637       1 cache.go:39] Caches are synced for autoregister controller
	I0321 22:04:25.299735       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0321 22:04:25.299866       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0321 22:04:25.299874       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0321 22:04:25.299963       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0321 22:04:25.300837       1 shared_informer.go:280] Caches are synced for configmaps
	I0321 22:04:25.302522       1 controller.go:615] quota admission added evaluator for: namespaces
	I0321 22:04:25.313078       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0321 22:04:25.994263       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0321 22:04:26.204898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0321 22:04:26.208406       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0321 22:04:26.208419       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0321 22:04:26.629392       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0321 22:04:26.662131       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0321 22:04:26.787922       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0321 22:04:26.793124       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0321 22:04:26.793939       1 controller.go:615] quota admission added evaluator for: endpoints
	I0321 22:04:26.797341       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0321 22:04:27.281198       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0321 22:04:28.256855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0321 22:04:28.265783       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0321 22:04:28.273834       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0321 22:04:40.973793       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0321 22:04:41.075289       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [ab8122344f03] <==
	* I0321 22:04:40.286809       1 shared_informer.go:280] Caches are synced for job
	I0321 22:04:40.324329       1 shared_informer.go:280] Caches are synced for resource quota
	I0321 22:04:40.330466       1 shared_informer.go:280] Caches are synced for cronjob
	I0321 22:04:40.336630       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0321 22:04:40.340833       1 shared_informer.go:280] Caches are synced for resource quota
	I0321 22:04:40.649331       1 shared_informer.go:280] Caches are synced for garbage collector
	I0321 22:04:40.649353       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0321 22:04:40.670096       1 shared_informer.go:280] Caches are synced for garbage collector
	I0321 22:04:40.983575       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wnjrv"
	I0321 22:04:40.985047       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-97hnd"
	I0321 22:04:41.078943       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0321 22:04:41.095163       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0321 22:04:41.180642       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-69rb6"
	I0321 22:04:41.188616       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-wx8p9"
	I0321 22:04:41.280282       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-69rb6"
	W0321 22:05:11.114752       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-860915-m02" does not exist
	I0321 22:05:11.120679       1 range_allocator.go:372] Set node multinode-860915-m02 PodCIDR to [10.244.1.0/24]
	I0321 22:05:11.123318       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mhzgv"
	I0321 22:05:11.123351       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slz5b"
	W0321 22:05:11.826761       1 topologycache.go:232] Can't get CPU or zone information for multinode-860915-m02 node
	W0321 22:05:15.102291       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-860915-m02. Assuming now as a timestamp.
	I0321 22:05:15.102331       1 event.go:294] "Event occurred" object="multinode-860915-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-860915-m02 event: Registered Node multinode-860915-m02 in Controller"
	I0321 22:05:15.477035       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0321 22:05:15.484324       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-kpfz8"
	I0321 22:05:15.489077       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-62ggt"
	
	* 
	* ==> kube-proxy [a42f910ebb09] <==
	* I0321 22:04:41.748662       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0321 22:04:41.748721       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0321 22:04:41.748742       1 server_others.go:535] "Using iptables proxy"
	I0321 22:04:41.766404       1 server_others.go:176] "Using iptables Proxier"
	I0321 22:04:41.766436       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0321 22:04:41.766444       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0321 22:04:41.766461       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0321 22:04:41.766487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0321 22:04:41.766789       1 server.go:655] "Version info" version="v1.26.2"
	I0321 22:04:41.766805       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0321 22:04:41.767258       1 config.go:317] "Starting service config controller"
	I0321 22:04:41.767295       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0321 22:04:41.767688       1 config.go:444] "Starting node config controller"
	I0321 22:04:41.767713       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0321 22:04:41.767257       1 config.go:226] "Starting endpoint slice config controller"
	I0321 22:04:41.768601       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0321 22:04:41.868114       1 shared_informer.go:280] Caches are synced for node config
	I0321 22:04:41.868839       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0321 22:04:41.868848       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [c175274409c1] <==
	* W0321 22:04:25.285904       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0321 22:04:25.285921       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0321 22:04:25.286147       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0321 22:04:25.286169       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0321 22:04:26.096488       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0321 22:04:26.096520       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0321 22:04:26.243103       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0321 22:04:26.243127       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0321 22:04:26.279154       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0321 22:04:26.279183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0321 22:04:26.291550       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0321 22:04:26.291590       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0321 22:04:26.321706       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0321 22:04:26.321783       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0321 22:04:26.350813       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0321 22:04:26.350852       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0321 22:04:26.363774       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.363810       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.430510       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.430539       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.444359       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.444378       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0321 22:04:26.473821       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0321 22:04:26.473851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0321 22:04:26.782302       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2023-03-21 22:04:10 UTC, end at Tue 2023-03-21 22:05:23 UTC. --
	Mar 21 22:04:48 multinode-860915 kubelet[2309]: I0321 22:04:48.833795    2309 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 21 22:04:48 multinode-860915 kubelet[2309]: I0321 22:04:48.835036    2309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096380    2309 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5g5m\" (UniqueName: \"kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m\") pod \"a381b86f-bcde-484c-878c-056280374301\" (UID: \"a381b86f-bcde-484c-878c-056280374301\") "
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096450    2309 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume\") pod \"a381b86f-bcde-484c-878c-056280374301\" (UID: \"a381b86f-bcde-484c-878c-056280374301\") "
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: W0321 22:04:56.096630    2309 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a381b86f-bcde-484c-878c-056280374301/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.096790    2309 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume" (OuterVolumeSpecName: "config-volume") pod "a381b86f-bcde-484c-878c-056280374301" (UID: "a381b86f-bcde-484c-878c-056280374301"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.098439    2309 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m" (OuterVolumeSpecName: "kube-api-access-j5g5m") pod "a381b86f-bcde-484c-878c-056280374301" (UID: "a381b86f-bcde-484c-878c-056280374301"). InnerVolumeSpecName "kube-api-access-j5g5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.197307    2309 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a381b86f-bcde-484c-878c-056280374301-config-volume\") on node \"multinode-860915\" DevicePath \"\""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.197340    2309 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-j5g5m\" (UniqueName: \"kubernetes.io/projected/a381b86f-bcde-484c-878c-056280374301-kube-api-access-j5g5m\") on node \"multinode-860915\" DevicePath \"\""
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.224811    2309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0423e44126334b2958e61f5a0eb34ce609aa11ffbe722b96e164a3d05c2e7916"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.229421    2309 scope.go:115] "RemoveContainer" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.243073    2309 scope.go:115] "RemoveContainer" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.243799    2309 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.243852    2309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3} err="failed to get container status \"deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3\": rpc error: code = Unknown desc = Error: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401038    2309 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" containerID="deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401107    2309 kuberuntime_container.go:714] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301 containerName="coredns" containerID="docker://deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" gracePeriod=1
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.401134    2309 kuberuntime_container.go:739] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301 containerName="coredns" containerID={Type:docker ID:deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3}
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.405508    2309 kubelet.go:1874] failed to "KillContainer" for "coredns" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3"
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: E0321 22:04:56.405554    2309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: deddc3ae6b67a8b286da071e4da67d06ace02cc73a7c70e5bc895304194d3ae3\"" pod="kube-system/coredns-787d4945fb-69rb6" podUID=a381b86f-bcde-484c-878c-056280374301
	Mar 21 22:04:56 multinode-860915 kubelet[2309]: I0321 22:04:56.407115    2309 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a381b86f-bcde-484c-878c-056280374301 path="/var/lib/kubelet/pods/a381b86f-bcde-484c-878c-056280374301/volumes"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.493289    2309 topology_manager.go:210] "Topology Admit Handler"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: E0321 22:05:15.493380    2309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a381b86f-bcde-484c-878c-056280374301" containerName="coredns"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.493420    2309 memory_manager.go:346] "RemoveStaleState removing state" podUID="a381b86f-bcde-484c-878c-056280374301" containerName="coredns"
	Mar 21 22:05:15 multinode-860915 kubelet[2309]: I0321 22:05:15.606453    2309 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44dp\" (UniqueName: \"kubernetes.io/projected/ebd8bedf-1c50-4a50-bb45-ad2ffcf8e054-kube-api-access-b44dp\") pod \"busybox-6b86dd6d48-62ggt\" (UID: \"ebd8bedf-1c50-4a50-bb45-ad2ffcf8e054\") " pod="default/busybox-6b86dd6d48-62ggt"
	Mar 21 22:05:17 multinode-860915 kubelet[2309]: I0321 22:05:17.366362    2309 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-62ggt" podStartSLOduration=-9.223372034488459e+09 pod.CreationTimestamp="2023-03-21 22:05:15 +0000 UTC" firstStartedPulling="2023-03-21 22:05:16.043973735 +0000 UTC m=+47.807395569" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-21 22:05:17.365874778 +0000 UTC m=+49.129296630" watchObservedRunningTime="2023-03-21 22:05:17.366316358 +0000 UTC m=+49.129738208"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-860915 -n multinode-860915
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-860915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.08s)


Test pass (292/313)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.57
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.26.2/json-events 5.64
11 TestDownloadOnly/v1.26.2/preload-exists 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 1.77
19 TestBinaryMirror 1.23
20 TestOffline 75.76
22 TestAddons/Setup 104.5
24 TestAddons/parallel/Registry 14.95
25 TestAddons/parallel/Ingress 19.61
26 TestAddons/parallel/MetricsServer 5.6
27 TestAddons/parallel/HelmTiller 10.41
29 TestAddons/parallel/CSI 53.12
30 TestAddons/parallel/Headlamp 12.21
31 TestAddons/parallel/CloudSpanner 5.44
34 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/StoppedEnableDisable 11.07
36 TestCertOptions 36.14
37 TestCertExpiration 244.04
38 TestDockerFlags 35.76
39 TestForceSystemdFlag 33.65
40 TestForceSystemdEnv 33.72
41 TestKVMDriverInstallOrUpdate 1.75
45 TestErrorSpam/setup 27
46 TestErrorSpam/start 1.12
47 TestErrorSpam/status 1.45
48 TestErrorSpam/pause 1.58
49 TestErrorSpam/unpause 1.59
50 TestErrorSpam/stop 11.23
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 49.81
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 42.02
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
62 TestFunctional/serial/CacheCmd/cache/add_local 0.87
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
70 TestFunctional/serial/ExtraConfig 41.5
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.22
73 TestFunctional/serial/LogsFileCmd 1.26
75 TestFunctional/parallel/ConfigCmd 0.38
76 TestFunctional/parallel/DashboardCmd 10.95
77 TestFunctional/parallel/DryRun 0.68
78 TestFunctional/parallel/InternationalLanguage 0.27
79 TestFunctional/parallel/StatusCmd 1.59
83 TestFunctional/parallel/ServiceCmdConnect 7.96
84 TestFunctional/parallel/AddonsCmd 0.2
85 TestFunctional/parallel/PersistentVolumeClaim 39.55
87 TestFunctional/parallel/SSHCmd 1.23
88 TestFunctional/parallel/CpCmd 2.38
89 TestFunctional/parallel/MySQL 26.25
90 TestFunctional/parallel/FileSync 0.5
91 TestFunctional/parallel/CertSync 3.7
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
99 TestFunctional/parallel/License 0.18
100 TestFunctional/parallel/Version/short 0.06
101 TestFunctional/parallel/Version/components 1.07
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
107 TestFunctional/parallel/ImageCommands/Setup 1.22
108 TestFunctional/parallel/DockerEnv/bash 2.23
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.07
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.3
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.75
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.05
120 TestFunctional/parallel/ServiceCmd/List 0.57
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.88
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.66
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
131 TestFunctional/parallel/ServiceCmd/Format 0.63
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.25
133 TestFunctional/parallel/ServiceCmd/URL 0.65
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.66
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
136 TestFunctional/parallel/ProfileCmd/profile_list 0.55
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
138 TestFunctional/parallel/MountCmd/any-port 14.02
139 TestFunctional/parallel/MountCmd/specific-port 3.14
140 TestFunctional/delete_addon-resizer_images 0.16
141 TestFunctional/delete_my-image_image 0.06
142 TestFunctional/delete_minikube_cached_images 0.06
146 TestImageBuild/serial/NormalBuild 0.87
147 TestImageBuild/serial/BuildWithBuildArg 1.03
148 TestImageBuild/serial/BuildWithDockerIgnore 0.44
149 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.37
152 TestIngressAddonLegacy/StartLegacyK8sCluster 56.22
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.16
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.43
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.23
159 TestJSONOutput/start/Command 43.24
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 0.64
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 0.6
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 5.88
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.44
184 TestKicCustomNetwork/create_custom_network 29.29
185 TestKicCustomNetwork/use_default_bridge_network 29.08
186 TestKicExistingNetwork 29.31
187 TestKicCustomSubnet 28.16
188 TestKicStaticIP 28.54
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 59.3
193 TestMountStart/serial/StartWithMountFirst 7.27
194 TestMountStart/serial/VerifyMountFirst 0.44
195 TestMountStart/serial/StartWithMountSecond 7.08
196 TestMountStart/serial/VerifyMountSecond 0.45
197 TestMountStart/serial/DeleteFirst 2.07
198 TestMountStart/serial/VerifyMountPostDelete 0.44
199 TestMountStart/serial/Stop 1.38
200 TestMountStart/serial/RestartStopped 7.9
201 TestMountStart/serial/VerifyMountPostStop 0.45
204 TestMultiNode/serial/FreshStart2Nodes 71.56
207 TestMultiNode/serial/AddNode 17.58
208 TestMultiNode/serial/ProfileList 0.46
209 TestMultiNode/serial/CopyFile 15.81
210 TestMultiNode/serial/StopNode 3.03
211 TestMultiNode/serial/StartAfterStop 12.46
212 TestMultiNode/serial/RestartKeepsNodes 118.1
213 TestMultiNode/serial/DeleteNode 6.07
214 TestMultiNode/serial/StopMultiNode 22.09
215 TestMultiNode/serial/RestartMultiNode 59.23
216 TestMultiNode/serial/ValidateNameConflict 29.4
221 TestPreload 149.11
223 TestScheduledStopUnix 102.39
224 TestSkaffold 60.05
226 TestInsufficientStorage 12.78
227 TestRunningBinaryUpgrade 78.96
229 TestKubernetesUpgrade 379.03
230 TestMissingContainerUpgrade 107.25
231 TestStoppedBinaryUpgrade/Setup 0.55
232 TestStoppedBinaryUpgrade/Upgrade 74.98
233 TestStoppedBinaryUpgrade/MinikubeLogs 1.6
242 TestPause/serial/Start 47.72
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
245 TestNoKubernetes/serial/StartWithK8s 31.86
246 TestPause/serial/SecondStartNoReconfiguration 34.83
247 TestNoKubernetes/serial/StartWithStopK8s 7.31
248 TestNoKubernetes/serial/Start 7.68
249 TestNoKubernetes/serial/VerifyK8sNotRunning 0.47
250 TestNoKubernetes/serial/ProfileList 16.32
262 TestPause/serial/Pause 0.72
263 TestPause/serial/VerifyStatus 0.55
264 TestPause/serial/Unpause 1.12
265 TestPause/serial/PauseAgain 1.12
266 TestPause/serial/DeletePaused 2.85
267 TestPause/serial/VerifyDeletedResources 1.01
268 TestNoKubernetes/serial/Stop 1.48
269 TestNoKubernetes/serial/StartNoArgs 7.47
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.49
272 TestStartStop/group/old-k8s-version/serial/FirstStart 114.76
274 TestStartStop/group/no-preload/serial/FirstStart 52.36
275 TestStartStop/group/no-preload/serial/DeployApp 7.31
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
277 TestStartStop/group/no-preload/serial/Stop 10.94
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
279 TestStartStop/group/no-preload/serial/SecondStart 582.18
280 TestStartStop/group/old-k8s-version/serial/DeployApp 7.39
281 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
282 TestStartStop/group/old-k8s-version/serial/Stop 10.84
283 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
284 TestStartStop/group/old-k8s-version/serial/SecondStart 34.58
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.78
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 23.01
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.5
290 TestStartStop/group/old-k8s-version/serial/Pause 3.54
292 TestStartStop/group/newest-cni/serial/FirstStart 41.31
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.76
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.06
297 TestStartStop/group/embed-certs/serial/FirstStart 45.36
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 567.31
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.67
302 TestStartStop/group/newest-cni/serial/Stop 11.1
303 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
304 TestStartStop/group/newest-cni/serial/SecondStart 27.71
305 TestStartStop/group/embed-certs/serial/DeployApp 7.34
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
307 TestStartStop/group/embed-certs/serial/Stop 11.13
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.36
309 TestStartStop/group/embed-certs/serial/SecondStart 564.03
310 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.56
313 TestStartStop/group/newest-cni/serial/Pause 3.57
314 TestNetworkPlugins/group/auto/Start 42.88
315 TestNetworkPlugins/group/auto/KubeletFlags 0.48
316 TestNetworkPlugins/group/auto/NetCatPod 10.23
317 TestNetworkPlugins/group/auto/DNS 0.15
318 TestNetworkPlugins/group/auto/Localhost 0.13
319 TestNetworkPlugins/group/auto/HairPin 0.14
320 TestNetworkPlugins/group/enable-default-cni/Start 55.08
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.55
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
326 TestNetworkPlugins/group/kindnet/Start 56.44
327 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
329 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
330 TestNetworkPlugins/group/kindnet/DNS 0.18
331 TestNetworkPlugins/group/kindnet/Localhost 0.16
332 TestNetworkPlugins/group/kindnet/HairPin 0.15
333 TestNetworkPlugins/group/calico/Start 71.95
334 TestNetworkPlugins/group/calico/ControllerPod 5.02
335 TestNetworkPlugins/group/calico/KubeletFlags 0.46
336 TestNetworkPlugins/group/calico/NetCatPod 9.25
337 TestNetworkPlugins/group/calico/DNS 0.16
338 TestNetworkPlugins/group/calico/Localhost 0.14
339 TestNetworkPlugins/group/calico/HairPin 0.14
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
341 TestNetworkPlugins/group/custom-flannel/Start 58.72
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.56
344 TestStartStop/group/no-preload/serial/Pause 3.83
345 TestNetworkPlugins/group/flannel/Start 56.11
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.54
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
348 TestNetworkPlugins/group/custom-flannel/DNS 0.18
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
351 TestNetworkPlugins/group/flannel/ControllerPod 5.02
352 TestNetworkPlugins/group/flannel/KubeletFlags 0.52
353 TestNetworkPlugins/group/flannel/NetCatPod 9.23
354 TestNetworkPlugins/group/flannel/DNS 0.16
355 TestNetworkPlugins/group/flannel/Localhost 0.14
356 TestNetworkPlugins/group/flannel/HairPin 0.14
357 TestNetworkPlugins/group/false/Start 48.75
358 TestNetworkPlugins/group/bridge/Start 82.58
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.64
363 TestNetworkPlugins/group/kubenet/Start 53.36
364 TestNetworkPlugins/group/false/KubeletFlags 0.52
365 TestNetworkPlugins/group/false/NetCatPod 9.23
366 TestNetworkPlugins/group/false/DNS 0.16
367 TestNetworkPlugins/group/false/Localhost 0.14
368 TestNetworkPlugins/group/false/HairPin 0.15
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.52
372 TestStartStop/group/embed-certs/serial/Pause 3.45
373 TestNetworkPlugins/group/kubenet/KubeletFlags 0.47
374 TestNetworkPlugins/group/kubenet/NetCatPod 10.19
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
376 TestNetworkPlugins/group/bridge/NetCatPod 10.19
377 TestNetworkPlugins/group/kubenet/DNS 0.17
378 TestNetworkPlugins/group/kubenet/Localhost 0.15
379 TestNetworkPlugins/group/kubenet/HairPin 0.15
380 TestNetworkPlugins/group/bridge/DNS 0.16
381 TestNetworkPlugins/group/bridge/Localhost 0.13
382 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.16.0/json-events (9.57s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-021932 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-021932 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.570692524s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.57s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-021932
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-021932: exit status 85 (68.281814ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-021932 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-021932        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 21:49:24
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 21:49:24.742221   10544 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:49:24.742839   10544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:24.742860   10544 out.go:309] Setting ErrFile to fd 2...
	I0321 21:49:24.742868   10544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:24.743089   10544 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	W0321 21:49:24.743313   10544 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16124-3841/.minikube/config/config.json: open /home/jenkins/minikube-integration/16124-3841/.minikube/config/config.json: no such file or directory
	I0321 21:49:24.744239   10544 out.go:303] Setting JSON to true
	I0321 21:49:24.745052   10544 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1917,"bootTime":1679433448,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:49:24.745116   10544 start.go:135] virtualization: kvm guest
	I0321 21:49:24.748241   10544 out.go:97] [download-only-021932] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0321 21:49:24.748362   10544 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball: no such file or directory
	I0321 21:49:24.750239   10544 out.go:169] MINIKUBE_LOCATION=16124
	I0321 21:49:24.748430   10544 notify.go:220] Checking for updates...
	I0321 21:49:24.753787   10544 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:49:24.755509   10544 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 21:49:24.757137   10544 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 21:49:24.758813   10544 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0321 21:49:24.761716   10544 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0321 21:49:24.761889   10544 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:49:24.832038   10544 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 21:49:24.832120   10544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:49:24.953450   10544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-21 21:49:24.943749521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:49:24.953546   10544 docker.go:294] overlay module found
	I0321 21:49:24.955876   10544 out.go:97] Using the docker driver based on user configuration
	I0321 21:49:24.955900   10544 start.go:295] selected driver: docker
	I0321 21:49:24.955906   10544 start.go:856] validating driver "docker" against <nil>
	I0321 21:49:24.955976   10544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:49:25.074731   10544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2023-03-21 21:49:25.066177426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:49:25.074878   10544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0321 21:49:25.075514   10544 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0321 21:49:25.075716   10544 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0321 21:49:25.078057   10544 out.go:169] Using Docker driver with root privileges
	I0321 21:49:25.079603   10544 cni.go:84] Creating CNI manager for ""
	I0321 21:49:25.079627   10544 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0321 21:49:25.079636   10544 start_flags.go:319] config:
	{Name:download-only-021932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-021932 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:49:25.081408   10544 out.go:97] Starting control plane node download-only-021932 in cluster download-only-021932
	I0321 21:49:25.081434   10544 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 21:49:25.082873   10544 out.go:97] Pulling base image ...
	I0321 21:49:25.082897   10544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0321 21:49:25.082999   10544 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 21:49:25.106915   10544 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0321 21:49:25.106944   10544 cache.go:57] Caching tarball of preloaded images
	I0321 21:49:25.107142   10544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0321 21:49:25.109632   10544 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0321 21:49:25.109657   10544 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:25.145199   10544 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0321 21:49:25.152139   10544 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0321 21:49:25.152289   10544 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0321 21:49:25.152374   10544 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0321 21:49:27.310295   10544 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:27.310392   10544 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:28.062368   10544 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0321 21:49:28.062789   10544 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/download-only-021932/config.json ...
	I0321 21:49:28.062834   10544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/download-only-021932/config.json: {Name:mkd607bb81667b57dd586cba99cd321d9e2c3b9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 21:49:28.062999   10544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0321 21:49:28.063203   10544 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16124-3841/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-021932"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
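The preload steps in the log above (`preload.go:238`/`preload.go:256`) fetch the tarball with an `md5:` checksum embedded in the download URL and then verify the file on disk. A minimal sketch of that kind of verification with standard tools — the sample file name and expected hash below are illustrative stand-ins, not values taken from this run:

```shell
# Recompute an md5 and compare it against an expected value, the same check
# minikube's preload verification performs after downloading a tarball.
# (sample file and expected hash are hypothetical stand-ins)
printf 'hello' > /tmp/preload-sample.tar.lz4
EXPECTED="5d41402abc4b2a76b9719d911017c592"   # md5 of the string "hello"
ACTUAL=$(md5sum /tmp/preload-sample.tar.lz4 | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi
# prints: checksum OK
```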

TestDownloadOnly/v1.26.2/json-events (5.64s)

=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-021932 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-021932 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.636019411s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (5.64s)
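With `-o=json`, minikube emits one JSON event per output line, which is what the json-events test consumes. A minimal sketch of pulling a step name out of such a line with standard tools; the echoed event is an illustrative sample, not output captured from this run:

```shell
# Extract the "name" field from a single minikube JSON event line.
# (the echoed event is a hypothetical sample, not output from this test run)
echo '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","name":"Pulling base image ..."}}' \
  | grep -o '"name":"[^"]*"' | sed 's/^"name":"//; s/"$//'
# prints: Pulling base image ...
```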

TestDownloadOnly/v1.26.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

TestDownloadOnly/v1.26.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-021932
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-021932: exit status 85 (69.117124ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-021932 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-021932        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-021932 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-021932        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 21:49:34
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 21:49:34.386524   10788 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:49:34.386658   10788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:34.386670   10788 out.go:309] Setting ErrFile to fd 2...
	I0321 21:49:34.386677   10788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:34.386794   10788 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	W0321 21:49:34.386930   10788 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16124-3841/.minikube/config/config.json: open /home/jenkins/minikube-integration/16124-3841/.minikube/config/config.json: no such file or directory
	I0321 21:49:34.387408   10788 out.go:303] Setting JSON to true
	I0321 21:49:34.388806   10788 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1927,"bootTime":1679433448,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:49:34.388918   10788 start.go:135] virtualization: kvm guest
	I0321 21:49:34.392467   10788 out.go:97] [download-only-021932] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 21:49:34.394427   10788 out.go:169] MINIKUBE_LOCATION=16124
	I0321 21:49:34.392730   10788 notify.go:220] Checking for updates...
	I0321 21:49:34.396466   10788 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:49:34.398238   10788 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 21:49:34.400057   10788 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 21:49:34.401832   10788 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0321 21:49:34.404903   10788 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0321 21:49:34.405296   10788 config.go:182] Loaded profile config "download-only-021932": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0321 21:49:34.405339   10788 start.go:764] api.Load failed for download-only-021932: filestore "download-only-021932": Docker machine "download-only-021932" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0321 21:49:34.405388   10788 driver.go:365] Setting default libvirt URI to qemu:///system
	W0321 21:49:34.405416   10788 start.go:764] api.Load failed for download-only-021932: filestore "download-only-021932": Docker machine "download-only-021932" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0321 21:49:34.476863   10788 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 21:49:34.476971   10788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:49:34.599512   10788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:37 SystemTime:2023-03-21 21:49:34.590806377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:49:34.599646   10788 docker.go:294] overlay module found
	I0321 21:49:34.601755   10788 out.go:97] Using the docker driver based on existing profile
	I0321 21:49:34.601788   10788 start.go:295] selected driver: docker
	I0321 21:49:34.601795   10788 start.go:856] validating driver "docker" against &{Name:download-only-021932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-021932 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP:}
	I0321 21:49:34.602053   10788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:49:34.726419   10788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:37 SystemTime:2023-03-21 21:49:34.717966762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:49:34.726990   10788 cni.go:84] Creating CNI manager for ""
	I0321 21:49:34.727012   10788 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0321 21:49:34.727020   10788 start_flags.go:319] config:
	{Name:download-only-021932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:download-only-021932 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:49:34.729577   10788 out.go:97] Starting control plane node download-only-021932 in cluster download-only-021932
	I0321 21:49:34.729609   10788 cache.go:120] Beginning downloading kic base image for docker with docker
	I0321 21:49:34.731443   10788 out.go:97] Pulling base image ...
	I0321 21:49:34.731470   10788 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 21:49:34.731590   10788 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0321 21:49:34.771671   10788 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0321 21:49:34.771699   10788 cache.go:57] Caching tarball of preloaded images
	I0321 21:49:34.771837   10788 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 21:49:34.774223   10788 out.go:97] Downloading Kubernetes v1.26.2 preload ...
	I0321 21:49:34.774253   10788 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:34.796126   10788 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0321 21:49:34.796254   10788 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0321 21:49:34.796271   10788 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory, skipping pull
	I0321 21:49:34.796275   10788 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in cache, skipping pull
	I0321 21:49:34.796282   10788 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 as a tarball
	I0321 21:49:34.810707   10788 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f7b26d32aaabacae8612fb9b9e1a4b89 -> /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
	I0321 21:49:38.419249   10788 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:38.419349   10788 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16124-3841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 ...
	I0321 21:49:39.245493   10788 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.2 on docker
	I0321 21:49:39.245628   10788 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/download-only-021932/config.json ...
	I0321 21:49:39.245818   10788 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
	I0321 21:49:39.246010   10788 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16124-3841/.minikube/cache/linux/amd64/v1.26.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-021932"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.67s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-021932
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.77s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-770478 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-770478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-770478
--- PASS: TestDownloadOnlyKic (1.77s)

TestBinaryMirror (1.23s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-420917 --alsologtostderr --binary-mirror http://127.0.0.1:42923 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-420917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-420917
--- PASS: TestBinaryMirror (1.23s)

TestOffline (75.76s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-605909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-605909 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m10.241736558s)
helpers_test.go:175: Cleaning up "offline-docker-605909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-605909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-605909: (5.516550306s)
--- PASS: TestOffline (75.76s)

TestAddons/Setup (104.5s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-897151 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-897151 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m44.497808472s)
--- PASS: TestAddons/Setup (104.50s)

TestAddons/parallel/Registry (14.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 12.044639ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gw65x" [c82c688e-fc60-441f-8e7f-1550e7757eb0] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009068983s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xmx5b" [f28d5e08-dcb0-48f4-b5a2-5d5f49748e96] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012131128s
addons_test.go:305: (dbg) Run:  kubectl --context addons-897151 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-897151 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-897151 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.029121562s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 ip
2023/03/21 21:51:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.95s)

TestAddons/parallel/Ingress (19.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-897151 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-897151 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-897151 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [991cb944-33b1-430b-9097-6321f5fb1a88] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [991cb944-33b1-430b-9097-6321f5fb1a88] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.032903343s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-897151 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-897151 addons disable ingress --alsologtostderr -v=1: (7.593774737s)
--- PASS: TestAddons/parallel/Ingress (19.61s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.719211ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-6r2np" [178d1192-db7e-412b-8ff8-8e14471e9734] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009457281s
addons_test.go:380: (dbg) Run:  kubectl --context addons-897151 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (10.41s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 10.089994ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-99cgp" [2083c39e-b37b-43e9-8d13-9c8038bc2a05] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009126906s
addons_test.go:438: (dbg) Run:  kubectl --context addons-897151 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-897151 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.924959368s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.41s)

TestAddons/parallel/CSI (53.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.522007ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-897151 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-897151 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6ff93ae7-c683-4e1b-96d7-f30ed8697a18] Pending
helpers_test.go:344: "task-pv-pod" [6ff93ae7-c683-4e1b-96d7-f30ed8697a18] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6ff93ae7-c683-4e1b-96d7-f30ed8697a18] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00667392s
addons_test.go:549: (dbg) Run:  kubectl --context addons-897151 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-897151 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-897151 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-897151 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-897151 delete pod task-pv-pod: (1.014420603s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-897151 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-897151 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897151 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-897151 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c4772cd7-fba5-45f6-856f-34d769791e85] Pending
helpers_test.go:344: "task-pv-pod-restore" [c4772cd7-fba5-45f6-856f-34d769791e85] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c4772cd7-fba5-45f6-856f-34d769791e85] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007027955s
addons_test.go:591: (dbg) Run:  kubectl --context addons-897151 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-897151 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-897151 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-897151 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.453276338s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-897151 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.12s)

TestAddons/parallel/Headlamp (12.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-897151 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-897151 --alsologtostderr -v=1: (1.147942221s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-bltcc" [64aa7d36-1de3-411f-afe5-a39d874e39b1] Pending
helpers_test.go:344: "headlamp-58c48fc87f-bltcc" [64aa7d36-1de3-411f-afe5-a39d874e39b1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-bltcc" [64aa7d36-1de3-411f-afe5-a39d874e39b1] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.059199429s
--- PASS: TestAddons/parallel/Headlamp (12.21s)

TestAddons/parallel/CloudSpanner (5.44s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-pr7r9" [e782b5bd-0283-4010-ae11-daa64ac44e46] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006779715s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-897151
--- PASS: TestAddons/parallel/CloudSpanner (5.44s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-897151 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-897151 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (11.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-897151
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-897151: (10.837101072s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-897151
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-897151
--- PASS: TestAddons/StoppedEnableDisable (11.07s)

TestCertOptions (36.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-271794 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-271794 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.159162155s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-271794 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-271794 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-271794 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-271794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-271794
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-271794: (2.844421761s)
--- PASS: TestCertOptions (36.14s)

TestCertExpiration (244.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-365267 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-365267 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (28.609234653s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-365267 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-365267 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.392770566s)
helpers_test.go:175: Cleaning up "cert-expiration-365267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-365267
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-365267: (3.0393139s)
--- PASS: TestCertExpiration (244.04s)

TestDockerFlags (35.76s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-873770 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0321 22:18:43.629898   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-873770 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.682474357s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-873770 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-873770 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-873770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-873770
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-873770: (2.908020152s)
--- PASS: TestDockerFlags (35.76s)

TestForceSystemdFlag (33.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-097128 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-097128 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.177476689s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-097128 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-097128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-097128
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-097128: (2.889216605s)
--- PASS: TestForceSystemdFlag (33.65s)

TestForceSystemdEnv (33.72s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-448529 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-448529 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.875149768s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-448529 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-448529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-448529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-448529: (3.216538975s)
--- PASS: TestForceSystemdEnv (33.72s)

TestKVMDriverInstallOrUpdate (1.75s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.75s)

TestErrorSpam/setup (27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-867056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-867056 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-867056 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-867056 --driver=docker  --container-runtime=docker: (26.995217338s)
--- PASS: TestErrorSpam/setup (27.00s)

TestErrorSpam/start (1.12s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 start --dry-run
--- PASS: TestErrorSpam/start (1.12s)

TestErrorSpam/status (1.45s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 status
--- PASS: TestErrorSpam/status (1.45s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (11.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 stop: (10.863334608s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-867056 --log_dir /tmp/nospam-867056 stop
--- PASS: TestErrorSpam/stop (11.23s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16124-3841/.minikube/files/etc/test/nested/copy/10532/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.81s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-626626 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (49.808244339s)
--- PASS: TestFunctional/serial/StartWithProxy (49.81s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.02s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-626626 --alsologtostderr -v=8: (42.019387092s)
functional_test.go:658: soft start took 42.020143115s for "functional-626626" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.02s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-626626 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

TestFunctional/serial/CacheCmd/cache/add_local (0.87s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-626626 /tmp/TestFunctionalserialCacheCmdcacheadd_local3963006946/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache add minikube-local-cache-test:functional-626626
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache delete minikube-local-cache-test:functional-626626
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-626626
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (449.494212ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 kubectl -- --context functional-626626 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-626626 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.5s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-626626 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.503435303s)
functional_test.go:756: restart took 41.503585063s for "functional-626626" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.50s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-626626 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.22s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 logs: (1.221843802s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.26s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 logs --file /tmp/TestFunctionalserialLogsFileCmd2136413739/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 logs --file /tmp/TestFunctionalserialLogsFileCmd2136413739/001/logs.txt: (1.257477121s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 config get cpus: exit status 14 (59.872127ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 config get cpus: exit status 14 (57.091763ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
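The round-trip above (unset key fails with status 14, set then get succeeds, unset fails again) can be sketched as a pure-shell stand-in. This is a hedged illustration of the behavior the log shows, not minikube's actual implementation; the `config_*_cpus` functions and the single `cpus` variable standing in for the config store are hypothetical.

```shell
# Hedged stand-in (not minikube's implementation) for the config round-trip
# exercised above: "config get" on an unset key prints an error to stderr and
# exits with status 14, matching the "Non-zero exit ... exit status 14" lines.
cpus=""   # empty means unset, mimicking a config store with no "cpus" key

config_set_cpus()   { cpus=$1; }
config_unset_cpus() { cpus=""; }
config_get_cpus() {
  if [ -z "$cpus" ]; then
    echo "Error: specified key could not be found in config" >&2
    return 14
  fi
  echo "$cpus"
}

config_get_cpus 2>/dev/null || echo "exit status: $?"   # unset -> status 14
config_set_cpus 2
config_get_cpus                                         # prints 2
config_unset_cpus
config_get_cpus 2>/dev/null || echo "exit status: $?"   # unset again -> 14
```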
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DashboardCmd (10.95s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-626626 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-626626 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 72359: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.95s)

TestFunctional/parallel/DryRun (0.68s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-626626 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (291.09658ms)
-- stdout --
	* [functional-626626] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0321 21:56:31.744136   70416 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:56:31.744254   70416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:56:31.744265   70416 out.go:309] Setting ErrFile to fd 2...
	I0321 21:56:31.744272   70416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:56:31.744442   70416 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 21:56:31.744990   70416 out.go:303] Setting JSON to false
	I0321 21:56:31.746278   70416 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2344,"bootTime":1679433448,"procs":522,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:56:31.746350   70416 start.go:135] virtualization: kvm guest
	I0321 21:56:31.751641   70416 out.go:177] * [functional-626626] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 21:56:31.753320   70416 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 21:56:31.754712   70416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:56:31.753252   70416 notify.go:220] Checking for updates...
	I0321 21:56:31.758068   70416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 21:56:31.760099   70416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 21:56:31.761853   70416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 21:56:31.763413   70416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 21:56:31.765348   70416 config.go:182] Loaded profile config "functional-626626": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 21:56:31.765910   70416 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:56:31.843341   70416 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 21:56:31.843436   70416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:56:31.973023   70416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-21 21:56:31.964276988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:56:31.973120   70416 docker.go:294] overlay module found
	I0321 21:56:31.975015   70416 out.go:177] * Using the docker driver based on existing profile
	I0321 21:56:31.976332   70416 start.go:295] selected driver: docker
	I0321 21:56:31.976344   70416 start.go:856] validating driver "docker" against &{Name:functional-626626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-626626 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:56:31.976444   70416 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 21:56:31.978515   70416 out.go:177] 
	W0321 21:56:31.979890   70416 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0321 21:56:31.981163   70416 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.68s)

TestFunctional/parallel/InternationalLanguage (0.27s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-626626 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-626626 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (272.256018ms)
-- stdout --
	* [functional-626626] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0321 21:56:32.418747   71092 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:56:32.419086   71092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:56:32.419102   71092 out.go:309] Setting ErrFile to fd 2...
	I0321 21:56:32.419109   71092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:56:32.419503   71092 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 21:56:32.420644   71092 out.go:303] Setting JSON to false
	I0321 21:56:32.421897   71092 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2345,"bootTime":1679433448,"procs":520,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:56:32.421957   71092 start.go:135] virtualization: kvm guest
	I0321 21:56:32.423882   71092 out.go:177] * [functional-626626] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0321 21:56:32.425654   71092 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 21:56:32.425618   71092 notify.go:220] Checking for updates...
	I0321 21:56:32.427047   71092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:56:32.428550   71092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	I0321 21:56:32.430095   71092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	I0321 21:56:32.431608   71092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 21:56:32.432963   71092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 21:56:32.434672   71092 config.go:182] Loaded profile config "functional-626626": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 21:56:32.435041   71092 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:56:32.504621   71092 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0321 21:56:32.504715   71092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 21:56:32.631289   71092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:38 SystemTime:2023-03-21 21:56:32.621139718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 21:56:32.631402   71092 docker.go:294] overlay module found
	I0321 21:56:32.634355   71092 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0321 21:56:32.635958   71092 start.go:295] selected driver: docker
	I0321 21:56:32.635979   71092 start.go:856] validating driver "docker" against &{Name:functional-626626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-626626 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:56:32.636113   71092 start.go:867] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 21:56:32.638582   71092 out.go:177] 
	W0321 21:56:32.640077   71092 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo [Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB]
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo [Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB]
	I0321 21:56:32.641400   71092 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.59s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E0321 21:56:28.966940   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:28.972587   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:28.982797   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:29.003076   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:29.043337   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:29.123638   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 21:56:29.284117   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 status -o json
E0321 21:56:29.604255   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/StatusCmd (1.59s)

TestFunctional/parallel/ServiceCmdConnect (7.96s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-626626 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-626626 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-fx5td" [c0409615-f889-4568-8d52-b66699eaeff9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-fx5td" [c0409615-f889-4568-8d52-b66699eaeff9] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007082273s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service hello-node-connect --url
E0321 21:56:34.087194   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31129
functional_test.go:1673: http://192.168.49.2:31129: success! body:

Hostname: hello-node-connect-5cf7cc858f-fx5td

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31129
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.96s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (39.55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8fabfa02-cbaa-4b3d-92b3-543fad218332] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007837591s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-626626 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-626626 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-626626 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-626626 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e0b79d1d-ed13-4d9f-8335-136237378046] Pending
helpers_test.go:344: "sp-pod" [e0b79d1d-ed13-4d9f-8335-136237378046] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e0b79d1d-ed13-4d9f-8335-136237378046] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007513768s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-626626 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-626626 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-626626 delete -f testdata/storage-provisioner/pod.yaml: (1.490946362s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-626626 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [49de8849-1df4-488a-b49d-53d22b8a9cb5] Pending
helpers_test.go:344: "sp-pod" [49de8849-1df4-488a-b49d-53d22b8a9cb5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0321 21:56:39.208001   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [49de8849-1df4-488a-b49d-53d22b8a9cb5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.061091139s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-626626 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.55s)

TestFunctional/parallel/SSHCmd (1.23s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.23s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh -n functional-626626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 cp functional-626626:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4026849298/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh -n functional-626626 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

TestFunctional/parallel/MySQL (26.25s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-626626 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-dcs2d" [4154d229-2550-40fc-bef1-00a0d068749c] Pending
helpers_test.go:344: "mysql-888f84dd9-dcs2d" [4154d229-2550-40fc-bef1-00a0d068749c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-dcs2d" [4154d229-2550-40fc-bef1-00a0d068749c] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.042388852s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;": exit status 1 (121.373131ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;": exit status 1 (124.618579ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;": exit status 1 (180.772554ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-626626 exec mysql-888f84dd9-dcs2d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.25s)

TestFunctional/parallel/FileSync (0.5s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/10532/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /etc/test/nested/copy/10532/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)

TestFunctional/parallel/CertSync (3.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/10532.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /etc/ssl/certs/10532.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/10532.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /usr/share/ca-certificates/10532.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/105322.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /etc/ssl/certs/105322.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/105322.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /usr/share/ca-certificates/105322.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.70s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-626626 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh "sudo systemctl is-active crio": exit status 1 (583.767731ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 version -o=json --components: (1.072352079s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-626626 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-626626
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-626626
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-626626 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-626626 | ac5e72cda3baf | 30B    |
| registry.k8s.io/kube-scheduler              | v1.26.2           | db8f409d9a5d7 | 56.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | b6ee2207ee7a9 | 455MB  |
| registry.k8s.io/kube-apiserver              | v1.26.2           | 63d3239c3c159 | 134MB  |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 904b8cb13b932 | 142MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.2           | 240e201d5b0d8 | 123MB  |
| registry.k8s.io/kube-proxy                  | v1.26.2           | 6f64e7135a6ec | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-626626 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
E0321 21:56:49.448648   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-626626 image ls --format json:
[{"id":"b6ee2207ee7a9ed4f5c718a507fd00dace311300153b99f6830ce34741f2f093","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"123000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"ac5e72cda3bafdc30bf366454f2504b7969a46405403f0ee4c6965377ba27a30","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-626626"],"size":"30"},{"id":"904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"134000000"},{"id":"6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"65599999"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"56300000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-626626"],"size":"32900000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-626626 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ac5e72cda3bafdc30bf366454f2504b7969a46405403f0ee4c6965377ba27a30
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-626626
size: "30"
- id: 904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "123000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-626626
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "134000000"
- id: db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "56300000"
- id: 6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "65599999"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh pgrep buildkitd: exit status 1 (523.15446ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image build -t localhost/my-image:functional-626626 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 image build -t localhost/my-image:functional-626626 testdata/build: (3.12154595s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-626626 image build -t localhost/my-image:functional-626626 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b903dd514050
Removing intermediate container b903dd514050
---> cb12f67f446d
Step 3/3 : ADD content.txt /
---> 7372dcf6a270
Successfully built 7372dcf6a270
Successfully tagged localhost/my-image:functional-626626
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-626626 image build -t localhost/my-image:functional-626626 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

TestFunctional/parallel/ImageCommands/Setup (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.127500093s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-626626
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.22s)

TestFunctional/parallel/DockerEnv/bash (2.23s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-626626 docker-env) && out/minikube-linux-amd64 status -p functional-626626"
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-626626 docker-env) && out/minikube-linux-amd64 status -p functional-626626": (1.432200059s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-626626 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626: (4.746771266s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-626626 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-626626 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-mchks" [ccbcd2fc-f842-430e-bba0-faeaffa045af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-mchks" [ccbcd2fc-f842-430e-bba0-faeaffa045af] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.012878598s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-626626 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-626626 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c255000d-2f81-46ad-83a9-d18cd100335d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c255000d-2f81-46ad-83a9-d18cd100335d] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.039537091s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626: (2.455940866s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.75s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 image load --daemon gcr.io/google-containers/addon-resizer:functional-626626: (3.772227664s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.05s)

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service list -o json
functional_test.go:1492: Took "557.359074ms" to run "out/minikube-linux-amd64 -p functional-626626 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image save gcr.io/google-containers/addon-resizer:functional-626626 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-626626 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.100.161.175 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-626626 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:31679
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.66s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image rm gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ServiceCmd/Format (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

TestFunctional/parallel/ServiceCmd/URL (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:31679
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 image save --daemon gcr.io/google-containers/addon-resizer:functional-626626
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-626626 image save --daemon gcr.io/google-containers/addon-resizer:functional-626626: (2.520649804s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-626626
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.66s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0321 21:56:30.245190   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "502.886104ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "49.851223ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "506.25135ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "53.691026ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/MountCmd/any-port (14.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-626626 /tmp/TestFunctionalparallelMountCmdany-port2546786596/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1679435791486840488" to /tmp/TestFunctionalparallelMountCmdany-port2546786596/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1679435791486840488" to /tmp/TestFunctionalparallelMountCmdany-port2546786596/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1679435791486840488" to /tmp/TestFunctionalparallelMountCmdany-port2546786596/001/test-1679435791486840488
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p"
E0321 21:56:31.526372   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (529.155591ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 21 21:56 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 21 21:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 21 21:56 test-1679435791486840488
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh cat /mount-9p/test-1679435791486840488
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-626626 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8656984d-5220-4e95-a5bb-512f94d74fbe] Pending
helpers_test.go:344: "busybox-mount" [8656984d-5220-4e95-a5bb-512f94d74fbe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8656984d-5220-4e95-a5bb-512f94d74fbe] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
2023/03/21 21:56:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "busybox-mount" [8656984d-5220-4e95-a5bb-512f94d74fbe] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.038946415s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-626626 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-626626 /tmp/TestFunctionalparallelMountCmdany-port2546786596/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.02s)

TestFunctional/parallel/MountCmd/specific-port (3.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-626626 /tmp/TestFunctionalparallelMountCmdspecific-port763129621/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (671.223168ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-626626 /tmp/TestFunctionalparallelMountCmdspecific-port763129621/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-626626 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-626626 ssh "sudo umount -f /mount-9p": exit status 1 (484.921074ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-626626 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-626626 /tmp/TestFunctionalparallelMountCmdspecific-port763129621/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.14s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-626626
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-626626
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-626626
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (0.87s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-240032
--- PASS: TestImageBuild/serial/NormalBuild (0.87s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-240032
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-240032: (1.030449549s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.44s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-240032
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.44s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-240032
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

TestIngressAddonLegacy/StartLegacyK8sCluster (56.22s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-024295 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0321 21:57:50.889450   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-024295 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (56.220608002s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (56.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.16s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons enable ingress --alsologtostderr -v=5: (10.161362514s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.16s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-024295 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-024295 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.262757883s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-024295 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-024295 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c6df59ff-42aa-4476-aebb-63038e2d73f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c6df59ff-42aa-4476-aebb-63038e2d73f5] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.005499877s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-024295 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons disable ingress-dns --alsologtostderr -v=1: (4.278827033s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons disable ingress --alsologtostderr -v=1
E0321 21:59:12.812465   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-024295 addons disable ingress --alsologtostderr -v=1: (7.320224804s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.23s)

TestJSONOutput/start/Command (43.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-467929 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-467929 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.243800552s)
--- PASS: TestJSONOutput/start/Command (43.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-467929 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-467929 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-467929 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-467929 --output=json --user=testUser: (5.879088519s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.44s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-720365 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-720365 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.512155ms)

-- stdout --
	{"specversion":"1.0","id":"5679b6be-77fa-4e91-a46d-6abdaaf88db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-720365] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a6c0be0-8663-48cc-9b80-b148b7bc1e4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16124"}}
	{"specversion":"1.0","id":"6d32bc24-7f03-47d4-a107-1f4274a4b808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61ee40cb-c669-450e-bf04-7939a40df7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig"}}
	{"specversion":"1.0","id":"daa6c122-ae72-4437-967e-abfeb75bae81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube"}}
	{"specversion":"1.0","id":"629b101b-91be-407e-818d-17b987063e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1f550b86-4238-43aa-ba39-ef5684e30f06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14a89301-7611-41b3-a271-9e0298210a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-720365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-720365
--- PASS: TestErrorJSONOutput (0.44s)

TestKicCustomNetwork/create_custom_network (29.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-558728 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-558728 --network=: (26.614360008s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-558728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-558728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-558728: (2.609756624s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.29s)

TestKicCustomNetwork/use_default_bridge_network (29.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-083195 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-083195 --network=bridge: (26.628920462s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-083195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-083195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-083195: (2.384880224s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.08s)

TestKicExistingNetwork (29.31s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-647479 --network=existing-network
E0321 22:01:14.300186   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.332384   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.342708   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.362934   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.403223   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.483552   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.643930   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:14.964637   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:15.605524   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:16.885957   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:19.446889   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:24.567483   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:28.968649   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 22:01:34.808005   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-647479 --network=existing-network: (26.458722608s)
helpers_test.go:175: Cleaning up "existing-network-647479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-647479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-647479: (2.431091643s)
--- PASS: TestKicExistingNetwork (29.31s)

TestKicCustomSubnet (28.16s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-846788 --subnet=192.168.60.0/24
E0321 22:01:55.288439   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:01:56.653892   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-846788 --subnet=192.168.60.0/24: (25.404496433s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-846788 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-846788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-846788
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-846788: (2.687392804s)
--- PASS: TestKicCustomSubnet (28.16s)

TestKicStaticIP (28.54s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-949800 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-949800 --static-ip=192.168.200.200: (25.746845621s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-949800 ip
helpers_test.go:175: Cleaning up "static-ip-949800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-949800
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-949800: (2.558062867s)
--- PASS: TestKicStaticIP (28.54s)

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (59.3s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-664009 --driver=docker  --container-runtime=docker
E0321 22:02:36.249962   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-664009 --driver=docker  --container-runtime=docker: (26.413367239s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-666791 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-666791 --driver=docker  --container-runtime=docker: (25.88432929s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-664009
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-666791
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-666791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-666791
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-666791: (2.684604571s)
helpers_test.go:175: Cleaning up "first-664009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-664009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-664009: (2.686304745s)
--- PASS: TestMinikubeProfile (59.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.27s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-353660 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-353660 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.266261839s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.44s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-353660 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.08s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-370454 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0321 22:03:43.630196   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.635423   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.645707   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.665973   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.706296   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.786593   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:43.947220   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:44.267747   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:44.908034   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:46.188491   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-370454 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.08078175s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.08s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.45s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-370454 ssh -- ls /minikube-host
E0321 22:03:48.748988   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.07s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-353660 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-353660 --alsologtostderr -v=5: (2.071225899s)
--- PASS: TestMountStart/serial/DeleteFirst (2.07s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.44s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-370454 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

                                                
                                    
TestMountStart/serial/Stop (1.38s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-370454
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-370454: (1.377581011s)
--- PASS: TestMountStart/serial/Stop (1.38s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.9s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-370454
E0321 22:03:53.869974   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:03:58.170890   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-370454: (6.903375816s)
--- PASS: TestMountStart/serial/RestartStopped (7.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.45s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-370454 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.45s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.56s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860915 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0321 22:04:04.110491   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:04:24.591080   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:05:05.551846   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860915 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m10.760847396s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.56s)

                                                
                                    
TestMultiNode/serial/AddNode (17.58s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-860915 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-860915 -v 3 --alsologtostderr: (16.508501178s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr: (1.070848619s)
--- PASS: TestMultiNode/serial/AddNode (17.58s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (15.81s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 status --output json --alsologtostderr: (1.035132195s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp testdata/cp-test.txt multinode-860915:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile964884761/001/cp-test_multinode-860915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915:/home/docker/cp-test.txt multinode-860915-m02:/home/docker/cp-test_multinode-860915_multinode-860915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test_multinode-860915_multinode-860915-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915:/home/docker/cp-test.txt multinode-860915-m03:/home/docker/cp-test_multinode-860915_multinode-860915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test_multinode-860915_multinode-860915-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp testdata/cp-test.txt multinode-860915-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile964884761/001/cp-test_multinode-860915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m02:/home/docker/cp-test.txt multinode-860915:/home/docker/cp-test_multinode-860915-m02_multinode-860915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test_multinode-860915-m02_multinode-860915.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m02:/home/docker/cp-test.txt multinode-860915-m03:/home/docker/cp-test_multinode-860915-m02_multinode-860915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test_multinode-860915-m02_multinode-860915-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp testdata/cp-test.txt multinode-860915-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile964884761/001/cp-test_multinode-860915-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m03:/home/docker/cp-test.txt multinode-860915:/home/docker/cp-test_multinode-860915-m03_multinode-860915.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915 "sudo cat /home/docker/cp-test_multinode-860915-m03_multinode-860915.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 cp multinode-860915-m03:/home/docker/cp-test.txt multinode-860915-m02:/home/docker/cp-test_multinode-860915-m03_multinode-860915-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 ssh -n multinode-860915-m02 "sudo cat /home/docker/cp-test_multinode-860915-m03_multinode-860915-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.81s)

                                                
                                    
TestMultiNode/serial/StopNode (3.03s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 node stop m03: (1.386987096s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860915 status: exit status 7 (822.283582ms)
-- stdout --
	multinode-860915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr: exit status 7 (824.606383ms)
-- stdout --
	multinode-860915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0321 22:06:00.304017  178943 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:06:00.304123  178943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:06:00.304131  178943 out.go:309] Setting ErrFile to fd 2...
	I0321 22:06:00.304135  178943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:06:00.304243  178943 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 22:06:00.304393  178943 out.go:303] Setting JSON to false
	I0321 22:06:00.304426  178943 mustload.go:65] Loading cluster: multinode-860915
	I0321 22:06:00.304534  178943 notify.go:220] Checking for updates...
	I0321 22:06:00.304735  178943 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:06:00.304747  178943 status.go:255] checking status of multinode-860915 ...
	I0321 22:06:00.305108  178943 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:06:00.370038  178943 status.go:330] multinode-860915 host status = "Running" (err=<nil>)
	I0321 22:06:00.370062  178943 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:06:00.370286  178943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915
	I0321 22:06:00.433174  178943 host.go:66] Checking if "multinode-860915" exists ...
	I0321 22:06:00.433423  178943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:06:00.433456  178943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915
	I0321 22:06:00.498839  178943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915/id_rsa Username:docker}
	I0321 22:06:00.582275  178943 ssh_runner.go:195] Run: systemctl --version
	I0321 22:06:00.585597  178943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:06:00.594061  178943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0321 22:06:00.709082  178943 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:42 SystemTime:2023-03-21 22:06:00.700733077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1030-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0321 22:06:00.709590  178943 kubeconfig.go:92] found "multinode-860915" server: "https://192.168.58.2:8443"
	I0321 22:06:00.709616  178943 api_server.go:165] Checking apiserver status ...
	I0321 22:06:00.709659  178943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:06:00.718600  178943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2095/cgroup
	I0321 22:06:00.725343  178943 api_server.go:181] apiserver freezer: "9:freezer:/docker/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/kubepods/burstable/pod322dc81533eb2822b571df496b71ca36/ee6e07b4a24f59614832d2546c935fa89faed4cdf9ed7cf6e058852cd7a9bccd"
	I0321 22:06:00.725391  178943 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cea2236b98324a951f72d1a79eb9f22278797303dadcf11048b54d9c63675ddc/kubepods/burstable/pod322dc81533eb2822b571df496b71ca36/ee6e07b4a24f59614832d2546c935fa89faed4cdf9ed7cf6e058852cd7a9bccd/freezer.state
	I0321 22:06:00.731250  178943 api_server.go:203] freezer state: "THAWED"
	I0321 22:06:00.731286  178943 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0321 22:06:00.735283  178943 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0321 22:06:00.735300  178943 status.go:421] multinode-860915 apiserver status = Running (err=<nil>)
	I0321 22:06:00.735308  178943 status.go:257] multinode-860915 status: &{Name:multinode-860915 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:06:00.735321  178943 status.go:255] checking status of multinode-860915-m02 ...
	I0321 22:06:00.735543  178943 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:06:00.801676  178943 status.go:330] multinode-860915-m02 host status = "Running" (err=<nil>)
	I0321 22:06:00.801699  178943 host.go:66] Checking if "multinode-860915-m02" exists ...
	I0321 22:06:00.801926  178943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860915-m02
	I0321 22:06:00.865238  178943 host.go:66] Checking if "multinode-860915-m02" exists ...
	I0321 22:06:00.865480  178943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:06:00.865517  178943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860915-m02
	I0321 22:06:00.930511  178943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/16124-3841/.minikube/machines/multinode-860915-m02/id_rsa Username:docker}
	I0321 22:06:01.009943  178943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:06:01.018641  178943 status.go:257] multinode-860915-m02 status: &{Name:multinode-860915-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:06:01.018675  178943 status.go:255] checking status of multinode-860915-m03 ...
	I0321 22:06:01.018898  178943 cli_runner.go:164] Run: docker container inspect multinode-860915-m03 --format={{.State.Status}}
	I0321 22:06:01.082785  178943 status.go:330] multinode-860915-m03 host status = "Stopped" (err=<nil>)
	I0321 22:06:01.082808  178943 status.go:343] host is not running, skipping remaining checks
	I0321 22:06:01.082819  178943 status.go:257] multinode-860915-m03 status: &{Name:multinode-860915-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.03s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.46s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 node start m03 --alsologtostderr: (11.29201373s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 status: (1.04846075s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (118.1s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860915
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-860915
E0321 22:06:14.299525   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:06:27.474140   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:06:28.966782   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-860915: (22.916004612s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860915 --wait=true -v=8 --alsologtostderr
E0321 22:06:42.011048   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860915 --wait=true -v=8 --alsologtostderr: (1m35.082982098s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860915
--- PASS: TestMultiNode/serial/RestartKeepsNodes (118.10s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.07s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 node delete m03: (5.138380108s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.07s)

TestMultiNode/serial/StopMultiNode (22.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-860915 stop: (21.748457762s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860915 status: exit status 7 (170.665672ms)

-- stdout --
	multinode-860915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-860915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr: exit status 7 (170.795409ms)

-- stdout --
	multinode-860915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-860915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0321 22:08:39.677135  201029 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:08:39.677387  201029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:08:39.677401  201029 out.go:309] Setting ErrFile to fd 2...
	I0321 22:08:39.677409  201029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:08:39.677681  201029 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-3841/.minikube/bin
	I0321 22:08:39.677977  201029 out.go:303] Setting JSON to false
	I0321 22:08:39.678041  201029 mustload.go:65] Loading cluster: multinode-860915
	I0321 22:08:39.678514  201029 notify.go:220] Checking for updates...
	I0321 22:08:39.679113  201029 config.go:182] Loaded profile config "multinode-860915": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
	I0321 22:08:39.679130  201029 status.go:255] checking status of multinode-860915 ...
	I0321 22:08:39.679516  201029 cli_runner.go:164] Run: docker container inspect multinode-860915 --format={{.State.Status}}
	I0321 22:08:39.743593  201029 status.go:330] multinode-860915 host status = "Stopped" (err=<nil>)
	I0321 22:08:39.743611  201029 status.go:343] host is not running, skipping remaining checks
	I0321 22:08:39.743617  201029 status.go:257] multinode-860915 status: &{Name:multinode-860915 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:08:39.743635  201029 status.go:255] checking status of multinode-860915-m02 ...
	I0321 22:08:39.743844  201029 cli_runner.go:164] Run: docker container inspect multinode-860915-m02 --format={{.State.Status}}
	I0321 22:08:39.805164  201029 status.go:330] multinode-860915-m02 host status = "Stopped" (err=<nil>)
	I0321 22:08:39.805184  201029 status.go:343] host is not running, skipping remaining checks
	I0321 22:08:39.805192  201029 status.go:257] multinode-860915-m02 status: &{Name:multinode-860915-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.09s)

TestMultiNode/serial/RestartMultiNode (59.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860915 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0321 22:08:43.631332   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:09:11.315097   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860915 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.266306735s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-860915 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.23s)

TestMultiNode/serial/ValidateNameConflict (29.4s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-860915
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860915-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-860915-m02 --driver=docker  --container-runtime=docker: exit status 14 (76.501931ms)

-- stdout --
	* [multinode-860915-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-860915-m02' is duplicated with machine name 'multinode-860915-m02' in profile 'multinode-860915'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-860915-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-860915-m03 --driver=docker  --container-runtime=docker: (26.271823938s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-860915
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-860915: exit status 80 (406.724449ms)

-- stdout --
	* Adding node m03 to cluster multinode-860915
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-860915-m03 already exists in multinode-860915-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-860915-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-860915-m03: (2.596553651s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.40s)

TestPreload (149.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-729546 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0321 22:11:14.300350   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:11:28.966492   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-729546 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m30.616752477s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-729546 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-729546
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-729546: (10.88284087s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-729546 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-729546 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (43.272784567s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-729546 -- docker images
helpers_test.go:175: Cleaning up "test-preload-729546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-729546
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-729546: (2.852207542s)
--- PASS: TestPreload (149.11s)

TestScheduledStopUnix (102.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-975296 --memory=2048 --driver=docker  --container-runtime=docker
E0321 22:12:52.014197   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-975296 --memory=2048 --driver=docker  --container-runtime=docker: (28.009457177s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-975296 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-975296 -n scheduled-stop-975296
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-975296 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-975296 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-975296 -n scheduled-stop-975296
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-975296
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-975296 --schedule 15s
E0321 22:13:43.632929   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-975296
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-975296: exit status 7 (125.752768ms)

-- stdout --
	scheduled-stop-975296
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-975296 -n scheduled-stop-975296
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-975296 -n scheduled-stop-975296: exit status 7 (126.652959ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-975296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-975296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-975296: (2.390375844s)
--- PASS: TestScheduledStopUnix (102.39s)

TestSkaffold (60.05s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1997954161 version
skaffold_test.go:63: skaffold version: v2.2.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-516596 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-516596 --memory=2600 --driver=docker  --container-runtime=docker: (27.050012524s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1997954161 run --minikube-profile skaffold-516596 --kube-context skaffold-516596 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1997954161 run --minikube-profile skaffold-516596 --kube-context skaffold-516596 --status-check=true --port-forward=false --interactive=false: (18.885272863s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-9576564cd-znqqr" [c42ca9cb-ab27-4015-aeb3-14b6fc633ff7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013244713s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6444fd5ff-fwdws" [b00c018a-9b7c-4555-bb97-0c3153c6c2e2] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005702022s
helpers_test.go:175: Cleaning up "skaffold-516596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-516596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-516596: (2.894593508s)
--- PASS: TestSkaffold (60.05s)

TestInsufficientStorage (12.78s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-625311 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-625311 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.625504073s)
                                                
-- stdout --
	{"specversion":"1.0","id":"ec76012e-0bb5-48cf-9a33-d9c31415eca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-625311] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64d1d1c8-1973-4a12-98c9-efd9d4ed9d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16124"}}
	{"specversion":"1.0","id":"a3ae0b83-679a-49d1-8beb-4ce04faf7fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd271573-bbfe-46e8-b9d2-f436964030eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig"}}
	{"specversion":"1.0","id":"b1cc64ae-c450-4634-9364-90c7fa102355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube"}}
	{"specversion":"1.0","id":"5a4c1c6a-6832-4f73-9b8d-6fd93e90cd93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5078a804-6fe5-490e-8d9f-0f7846594068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d6c13415-2649-4984-ae82-0b302792b28d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"84c594bb-b80e-492a-9909-1ee8527a4f98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a7618299-96d2-4385-b29f-409825e2aa6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"49a768d2-3756-4c19-ac5f-e1f57d636a59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"efc93361-c58c-4f96-9f9c-d79506a3dfa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-625311 in cluster insufficient-storage-625311","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e30c2b4f-d913-4c8d-aac5-45eea9a8736f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a81b4d14-4ae6-449f-992d-1c0200e78ec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d5f906b-d9bf-4011-b0c7-1c22630eba5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-625311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-625311 --output=json --layout=cluster: exit status 7 (452.681365ms)

-- stdout --
	{"Name":"insufficient-storage-625311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-625311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0321 22:15:37.163783  249780 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-625311" does not appear in /home/jenkins/minikube-integration/16124-3841/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-625311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-625311 --output=json --layout=cluster: exit status 7 (443.395264ms)

-- stdout --
	{"Name":"insufficient-storage-625311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-625311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0321 22:15:37.607752  249974 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-625311" does not appear in /home/jenkins/minikube-integration/16124-3841/kubeconfig
	E0321 22:15:37.615589  249974 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/insufficient-storage-625311/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-625311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-625311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-625311: (2.254262475s)
--- PASS: TestInsufficientStorage (12.78s)

TestRunningBinaryUpgrade (78.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.137215087.exe start -p running-upgrade-399488 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.137215087.exe start -p running-upgrade-399488 --memory=2200 --vm-driver=docker  --container-runtime=docker: (55.724714355s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-399488 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-399488 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.447242845s)
helpers_test.go:175: Cleaning up "running-upgrade-399488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-399488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-399488: (2.367527075s)
--- PASS: TestRunningBinaryUpgrade (78.96s)

TestKubernetesUpgrade (379.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.849333845s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-708049
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-708049: (1.51839995s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-708049 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-708049 status --format={{.Host}}: exit status 7 (134.797911ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.819443222s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-708049 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (71.210082ms)

-- stdout --
	* [kubernetes-upgrade-708049] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-708049
	    minikube start -p kubernetes-upgrade-708049 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7080492 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-708049 --kubernetes-version=v1.26.2
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-708049 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.881621768s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-708049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-708049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-708049: (2.696360007s)
--- PASS: TestKubernetesUpgrade (379.03s)

TestMissingContainerUpgrade (107.25s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.3831085147.exe start -p missing-upgrade-499795 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.3831085147.exe start -p missing-upgrade-499795 --memory=2200 --driver=docker  --container-runtime=docker: (59.360413144s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-499795
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-499795: (1.839333295s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-499795
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-499795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-499795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.246733803s)
helpers_test.go:175: Cleaning up "missing-upgrade-499795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-499795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-499795: (3.223049356s)
--- PASS: TestMissingContainerUpgrade (107.25s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (74.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.1923009094.exe start -p stopped-upgrade-800138 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0321 22:16:14.299721   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:16:28.966325   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.1923009094.exe start -p stopped-upgrade-800138 --memory=2200 --vm-driver=docker  --container-runtime=docker: (49.712814406s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.1923009094.exe -p stopped-upgrade-800138 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.1923009094.exe -p stopped-upgrade-800138 stop: (2.656851943s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-800138 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-800138 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.605143237s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.6s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-800138
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-800138: (1.603980505s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.60s)

TestPause/serial/Start (47.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-955052 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-955052 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (47.717582856s)
--- PASS: TestPause/serial/Start (47.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (74.14675ms)

-- stdout --
	* [NoKubernetes-790225] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-3841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-3841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (31.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790225 --driver=docker  --container-runtime=docker
E0321 22:17:37.376861   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790225 --driver=docker  --container-runtime=docker: (31.355353571s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790225 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.86s)

TestPause/serial/SecondStartNoReconfiguration (34.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-955052 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-955052 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.813244877s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.83s)

TestNoKubernetes/serial/StartWithStopK8s (7.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --driver=docker  --container-runtime=docker: (4.213958086s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-790225 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-790225 status -o json: exit status 2 (543.87517ms)

-- stdout --
	{"Name":"NoKubernetes-790225","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-790225
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-790225: (2.554910148s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.31s)

TestNoKubernetes/serial/Start (7.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790225 --no-kubernetes --driver=docker  --container-runtime=docker: (7.677683502s)
--- PASS: TestNoKubernetes/serial/Start (7.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.47s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790225 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790225 "sudo systemctl is-active --quiet service kubelet": exit status 1 (472.160706ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.47s)

TestNoKubernetes/serial/ProfileList (16.32s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.473120393s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.32s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-955052 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.55s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-955052 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-955052 --output=json --layout=cluster: exit status 2 (551.749417ms)

-- stdout --
	{"Name":"pause-955052","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-955052","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)

TestPause/serial/Unpause (1.12s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-955052 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-955052 --alsologtostderr -v=5: (1.123867396s)
--- PASS: TestPause/serial/Unpause (1.12s)

TestPause/serial/PauseAgain (1.12s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-955052 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-955052 --alsologtostderr -v=5: (1.11767778s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

TestPause/serial/DeletePaused (2.85s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-955052 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-955052 --alsologtostderr -v=5: (2.85474743s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

TestPause/serial/VerifyDeletedResources (1.01s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-955052
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-955052: exit status 1 (67.558352ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-955052: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.01s)

TestNoKubernetes/serial/Stop (1.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-790225
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-790225: (1.474938518s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

TestNoKubernetes/serial/StartNoArgs (7.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-790225 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-790225 --driver=docker  --container-runtime=docker: (7.468419562s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.49s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-790225 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-790225 "sudo systemctl is-active --quiet service kubelet": exit status 1 (485.993784ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.49s)

TestStartStop/group/old-k8s-version/serial/FirstStart (114.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-260155 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-260155 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (1m54.761185999s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.76s)

TestStartStop/group/no-preload/serial/FirstStart (52.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0321 22:20:06.675637   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:20:14.171868   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.177114   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.187354   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.207576   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.247813   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.328397   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.488962   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:14.810125   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:15.450668   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:16.730950   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:19.291930   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:20:24.413024   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (52.360754429s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.36s)

TestStartStop/group/no-preload/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368871 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [caded42d-1da3-48ac-866e-42a00fb17f2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0321 22:20:34.653780   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
helpers_test.go:344: "busybox" [caded42d-1da3-48ac-866e-42a00fb17f2f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.012438966s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368871 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-368871 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-368871 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (10.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-368871 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-368871 --alsologtostderr -v=3: (10.939518846s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368871 -n no-preload-368871
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368871 -n no-preload-368871: exit status 7 (116.528792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-368871 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (582.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
E0321 22:20:55.134917   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (9m41.67311703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368871 -n no-preload-368871
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (582.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-260155 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a667d85e-6380-4ef2-8e7e-fe92d5b8a9f4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0321 22:21:14.299846   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a667d85e-6380-4ef2-8e7e-fe92d5b8a9f4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.013444312s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-260155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-260155 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-260155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/old-k8s-version/serial/Stop (10.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-260155 --alsologtostderr -v=3
E0321 22:21:28.966628   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-260155 --alsologtostderr -v=3: (10.839524646s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.84s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-260155 -n old-k8s-version-260155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-260155 -n old-k8s-version-260155: exit status 7 (131.633982ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-260155 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (34.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-260155 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0321 22:21:36.095978   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-260155 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (34.048620623s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-260155 -n old-k8s-version-260155
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (34.58s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-932033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-932033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (45.779608625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.78s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zxkrs" [9418bc75-aada-44d8-a863-e574d5c993cf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zxkrs" [9418bc75-aada-44d8-a863-e574d5c993cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.011269554s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (23.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zxkrs" [9418bc75-aada-44d8-a863-e574d5c993cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006096084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-260155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-260155 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-260155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-260155 -n old-k8s-version-260155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-260155 -n old-k8s-version-260155: exit status 2 (510.257672ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-260155 -n old-k8s-version-260155
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-260155 -n old-k8s-version-260155: exit status 2 (496.918633ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-260155 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-260155 -n old-k8s-version-260155
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-260155 -n old-k8s-version-260155
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

TestStartStop/group/newest-cni/serial/FirstStart (41.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-009815 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-009815 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (41.308542402s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.31s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-932033 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea91f4e1-f635-4ba5-87c0-0d581070b7ec] Pending
helpers_test.go:344: "busybox" [ea91f4e1-f635-4ba5-87c0-0d581070b7ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ea91f4e1-f635-4ba5-87c0-0d581070b7ec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.012858892s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-932033 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-932033 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-932033 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-932033 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-932033 --alsologtostderr -v=3: (11.063076278s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-407449 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-407449 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (45.362865237s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033: exit status 7 (141.632961ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-932033 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-932033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-932033 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (9m26.684457785s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-009815 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/newest-cni/serial/Stop (11.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-009815 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-009815 --alsologtostderr -v=3: (11.099162738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.10s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-009815 -n newest-cni-009815
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-009815 -n newest-cni-009815: exit status 7 (117.03475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-009815 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (27.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-009815 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-009815 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (27.151189837s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-009815 -n newest-cni-009815
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.71s)

TestStartStop/group/embed-certs/serial/DeployApp (7.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-407449 create -f testdata/busybox.yaml
E0321 22:23:43.630156   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b214b7c-5710-4f2e-96fb-5eda62adb13b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b214b7c-5710-4f2e-96fb-5eda62adb13b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.012559299s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-407449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-407449 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-407449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/embed-certs/serial/Stop (11.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-407449 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-407449 --alsologtostderr -v=3: (11.13200105s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407449 -n embed-certs-407449
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407449 -n embed-certs-407449: exit status 7 (155.753108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-407449 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/embed-certs/serial/SecondStart (564.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-407449 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-407449 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.2: (9m23.494188165s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407449 -n embed-certs-407449
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (564.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-009815 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.56s)

TestStartStop/group/newest-cni/serial/Pause (3.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-009815 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-009815 -n newest-cni-009815
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-009815 -n newest-cni-009815: exit status 2 (481.828178ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-009815 -n newest-cni-009815
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-009815 -n newest-cni-009815: exit status 2 (481.631073ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-009815 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-009815 -n newest-cni-009815
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-009815 -n newest-cni-009815
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.57s)

TestNetworkPlugins/group/auto/Start (42.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (42.875152728s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6x8g5" [d9791881-3e11-41ad-879b-d92f0ed68b64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6x8g5" [d9791881-3e11-41ad-879b-d92f0ed68b64] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005879933s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0321 22:25:41.856606   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
E0321 22:26:13.327205   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.332457   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.342690   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.362934   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.403200   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.483531   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.643896   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:13.964839   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:14.299729   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:26:14.605064   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:15.885534   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:18.446124   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:26:23.567220   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (55.082748962s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.08s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.55s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4mfpr" [37b2bf35-8845-4b91-aa5e-f1a1cfe660f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 22:26:28.966846   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-4mfpr" [37b2bf35-8845-4b91-aa5e-f1a1cfe660f5] Running
E0321 22:26:33.807983   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005664904s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0321 22:27:35.249516   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.438879737s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.44s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dhs5j" [d1d3d388-ab97-4da0-a502-9adf8500857d] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01517185s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xp87c" [3259b04d-1fb5-4d88-bd61-1e09d739608b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-xp87c" [3259b04d-1fb5-4d88-bd61-1e09d739608b] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006817409s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0321 22:28:43.630532   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/ingress-addon-legacy-024295/client.crt: no such file or directory
E0321 22:28:57.169919   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:29:32.015199   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m11.952462969s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.95s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-427rh" [8f058860-f820-4bba-9088-b2c4c08311d3] Running
E0321 22:29:55.059796   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.065046   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.075265   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.095480   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.135744   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.216084   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.376456   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:55.697158   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:56.338155   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:29:57.618433   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014918881s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-pm9wt" [361c2465-82a2-40ed-a2f5-8726c2385d13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 22:30:00.178852   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-pm9wt" [361c2465-82a2-40ed-a2f5-8726c2385d13] Running
E0321 22:30:05.299680   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005650482s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-l4h2b" [f9dd678b-aeb5-49b3-9c0a-05b96ccf721b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014185384s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0321 22:30:36.021665   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.7215684s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.72s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-l4h2b" [f9dd678b-aeb5-49b3-9c0a-05b96ccf721b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006922998s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-368871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-368871 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.56s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-368871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368871 -n no-preload-368871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368871 -n no-preload-368871: exit status 2 (537.995635ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368871 -n no-preload-368871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368871 -n no-preload-368871: exit status 2 (578.199501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-368871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368871 -n no-preload-368871
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368871 -n no-preload-368871
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.83s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0321 22:31:13.328101   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
E0321 22:31:14.300329   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/functional-626626/client.crt: no such file or directory
E0321 22:31:16.982331   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
E0321 22:31:26.882658   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:26.887894   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:26.898164   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:26.918399   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:26.958637   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:27.039649   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:27.200398   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:27.520651   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:28.161713   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:28.966747   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/addons-897151/client.crt: no such file or directory
E0321 22:31:29.442210   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
E0321 22:31:32.002847   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (56.113053023s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-l5x4w" [ea35a1dd-12bc-444a-b428-34af532c2016] Pending
helpers_test.go:344: "netcat-694fc96674-l5x4w" [ea35a1dd-12bc-444a-b428-34af532c2016] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 22:31:37.123617   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/enable-default-cni-596814/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-l5x4w" [ea35a1dd-12bc-444a-b428-34af532c2016] Running
E0321 22:31:41.010159   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/old-k8s-version-260155/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005987032s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dn99s" [abff4a28-3168-48f1-83b7-9886c9e7dbe9] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015546722s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)

TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5s24q" [ed0b4804-1c0c-43b6-bb64-35a0e9ac54e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-5s24q" [ed0b4804-1c0c-43b6-bb64-35a0e9ac54e9] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006109261s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/false/Start (48.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (48.751655806s)
--- PASS: TestNetworkPlugins/group/false/Start (48.75s)


TestNetworkPlugins/group/bridge/Start (82.58s)

net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.576605862s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dv786" [46cd59aa-4695-497f-9952-5a10f86c83c1] Running
E0321 22:32:38.903472   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/auto-596814/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013478578s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)


TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dv786" [46cd59aa-4695-497f-9952-5a10f86c83c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006768339s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-932033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-932033 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)


TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)

start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-932033 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033: exit status 2 (522.650841ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033: exit status 2 (516.818374ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-932033 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-932033 -n default-k8s-diff-port-932033
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)

TestNetworkPlugins/group/kubenet/Start (53.36s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0321 22:32:59.105904   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.111165   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.121428   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.141701   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.181975   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.262318   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.423728   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:32:59.744069   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
E0321 22:33:00.384960   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-596814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (53.363610797s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (53.36s)

TestNetworkPlugins/group/false/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.52s)

TestNetworkPlugins/group/false/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-t4hns" [1748599f-fdb0-4f7a-bb0c-9c5366cf85cb] Pending
E0321 22:33:01.666052   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-t4hns" [1748599f-fdb0-4f7a-bb0c-9c5366cf85cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 22:33:04.226324   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-t4hns" [1748599f-fdb0-4f7a-bb0c-9c5366cf85cb] Running
E0321 22:33:09.347097   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.006284134s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.23s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4tc2t" [20c80be8-7fe9-49f3-bb14-3c92050c9b86] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014190705s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4tc2t" [20c80be8-7fe9-49f3-bb14-3c92050c9b86] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006857753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-407449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)


TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-407449 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.52s)

TestStartStop/group/embed-certs/serial/Pause (3.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-407449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407449 -n embed-certs-407449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407449 -n embed-certs-407449: exit status 2 (485.044525ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407449 -n embed-certs-407449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407449 -n embed-certs-407449: exit status 2 (486.520747ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-407449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407449 -n embed-certs-407449
E0321 22:33:40.069032   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kindnet-596814/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407449 -n embed-certs-407449
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.45s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xwq7f" [199f72f8-d826-400d-bfca-9f02affe8cb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-xwq7f" [199f72f8-d826-400d-bfca-9f02affe8cb0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005370742s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-596814 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-596814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-m6tvp" [3fc5d705-d2e4-456d-a163-e6d0eb0cb73e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-m6tvp" [3fc5d705-d2e4-456d-a163-e6d0eb0cb73e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006159443s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-596814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-596814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
Test skip (19/313)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

TestDownloadOnly/v1.26.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

TestDownloadOnly/v1.26.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.5s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-135676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-135676
E0321 22:22:58.016445   10532 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/skaffold-516596/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.50s)

TestNetworkPlugins/group/cilium (3.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522:
----------------------- debugLogs start: cilium-596814 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-596814

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-596814

>>> host: /etc/nsswitch.conf:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/hosts:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/resolv.conf:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-596814

>>> host: crictl pods:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: crictl containers:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> k8s: describe netcat deployment:
error: context "cilium-596814" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-596814" does not exist

>>> k8s: netcat logs:
error: context "cilium-596814" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-596814" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-596814" does not exist

>>> k8s: coredns logs:
error: context "cilium-596814" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-596814" does not exist

>>> k8s: api server logs:
error: context "cilium-596814" does not exist

>>> host: /etc/cni:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: ip a s:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: ip r s:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: iptables-save:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: iptables table nat:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-596814

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-596814

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-596814" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-596814" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-596814

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-596814

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-596814" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-596814" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-596814" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-596814" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-596814" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: kubelet daemon config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> k8s: kubelet logs:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-708049
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-3841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:18:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-955052
contexts:
- context:
    cluster: kubernetes-upgrade-708049
    user: kubernetes-upgrade-708049
  name: kubernetes-upgrade-708049
- context:
    cluster: pause-955052
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:18:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-955052
  name: pause-955052
current-context: pause-955052
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-708049
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kubernetes-upgrade-708049/client.crt
    client-key: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/kubernetes-upgrade-708049/client.key
- name: pause-955052
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/pause-955052/client.crt
    client-key: /home/jenkins/minikube-integration/16124-3841/.minikube/profiles/pause-955052/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-596814

>>> host: docker daemon status:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: docker daemon config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: docker system info:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: cri-docker daemon status:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: cri-docker daemon config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: cri-dockerd version:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: containerd daemon status:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: containerd daemon config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: containerd config dump:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: crio daemon status:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: crio daemon config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: /etc/crio:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

>>> host: crio config:
* Profile "cilium-596814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-596814"

----------------------- debugLogs end: cilium-596814 [took: 3.121191936s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-596814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-596814
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)