Test Report: Docker_Linux 15909

c3ced9e44b664dea818a5c37f69b411b40c816d1:2023-02-24:28040

Failed tests (2/308)

Order  Failed test                             Duration (s)
200    TestMultiNode/serial/DeployApp2Nodes    5.54
201    TestMultiNode/serial/PingHostFrom2Pods  3.23
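To retry only these two failures locally, a `go test -run` regex selecting both subtests can be used. A sketch (the `go test` line is illustrative and assumes the standard minikube integration-test layout and a configured driver):

```shell
# Regex selecting exactly the two failed subtests; go test splits the
# pattern on '/' and applies each segment to the matching subtest level.
run_regex='TestMultiNode/serial/(DeployApp2Nodes|PingHostFrom2Pods)'

# Sanity-check: the regex matches both failure names and nothing else here.
printf '%s\n' \
  TestMultiNode/serial/DeployApp2Nodes \
  TestMultiNode/serial/PingHostFrom2Pods \
  | grep -Ec "$run_regex"

# Then (requires a minikube checkout; adjust -timeout as needed):
# go test ./test/integration -run "$run_regex" -timeout 30m
```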
TestMultiNode/serial/DeployApp2Nodes (5.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-461512 -- rollout status deployment/busybox: (1.588585s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.io: exit status 1 (163.848356ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-5jg4x could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-tj597 -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.default: exit status 1 (161.889785ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-5jg4x could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-tj597 -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (179.601441ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-5jg4x could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-tj597 -- nslookup kubernetes.default.svc.cluster.local
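The failing sequence above boils down to two checks: the Pod IP count (one IP per node expected) and in-pod DNS resolution against each lookup target. A minimal shell sketch, with the pod and profile names taken from the log; the real kubectl invocations require a running cluster, so they are shown as comments:

```shell
# The jsonpath query returned a single IP ('10.244.0.3') where the test
# expects one per node; counting words reproduces the failed assertion.
ips="10.244.0.3"   # stand-in for: kubectl ... -o jsonpath='{.items[*].status.podIP}'
echo "got $(echo "$ips" | wc -w | tr -d ' ') Pod IP(s), expected 2"

# DNS lookups the test then runs inside pod busybox-6b86dd6d48-5jg4x:
for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
  # out/minikube-linux-amd64 kubectl -p multinode-461512 -- \
  #   exec busybox-6b86dd6d48-5jg4x -- nslookup "$name"
  echo "would check $name"
done
```

Since the second busybox pod (tj597) resolves all three names, the failure is isolated to the pod scheduled on the node whose IP never appeared in the jsonpath output.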
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-461512
helpers_test.go:235: (dbg) docker inspect multinode-461512:

-- stdout --
	[
	    {
	        "Id": "8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd",
	        "Created": "2023-02-24T00:56:33.396639879Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T00:56:33.759260602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/hosts",
	        "LogPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd-json.log",
	        "Name": "/multinode-461512",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-461512:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-461512",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f-init/diff:/var/lib/docker/overlay2/1fe70832e0138bde815d3a324f05e073d3a1973b42aab12c10645a466ed7b978/diff:/var/lib/docker/overlay2/82b0ba24239050b2590c61fc0ca025cbcbc12de3239fa738d35253f8a5c7e972/diff:/var/lib/docker/overlay2/97149d64e56c6be885da441a048f01a2e6f93535d07240c3a6b69c63f1503930/diff:/var/lib/docker/overlay2/1ac1e7cc44d30a56fbdcbf72e6b5dab7e724aa5966044fe460e65cf440be551d/diff:/var/lib/docker/overlay2/1d2ef923b05561e68505c25afff7e0f7a174db5781e4bc0b09d004587941568a/diff:/var/lib/docker/overlay2/f6d602b2c8869a40598f32afb833eaff656758f7bd22e56b071c49e0c797ea46/diff:/var/lib/docker/overlay2/e27675cfda80daa6b54ccbc8d9b24d33061cb4a28b57e557c3d0607b1ca8c5fd/diff:/var/lib/docker/overlay2/743ac428a80ae93a9d3d1f3434309bec5bdf6ebb19ecb8f7f908698be8564088/diff:/var/lib/docker/overlay2/20ac9915298b6bc6d584f78851d401364c718a502a2859ddd9fd8401a19a7480/diff:/var/lib/docker/overlay2/36d165
c0301a63cdcbb14cf9e744eb4a46c6ff10b22e23ba1a98af8b792f377a/diff:/var/lib/docker/overlay2/38a6fe7c24710dcfc6bfd9640daf24d6f0033b8344c402c8c4a612982897a3ce/diff:/var/lib/docker/overlay2/3fdb857d38e4c0bc84111616dfc7ab74ba6995e518e517d3e2a0c14dfadc4ef8/diff:/var/lib/docker/overlay2/b1f93ca1a74f0de690373822899bcac40eacbedc6fde9a1a0b6fb748ee87db9f/diff:/var/lib/docker/overlay2/119805d1ad6f1abe3a4051c29db755a23aa5e0cc6c5216db76476a2a0b956630/diff:/var/lib/docker/overlay2/1fad59af19b8a00d817ce511b7e6b3be39ee5da67959bf3ea6050a902141b1b9/diff:/var/lib/docker/overlay2/a8d6b25a155af696d2dde78d17214a6c8b9f867b78c211c9ed1daa887f364de8/diff:/var/lib/docker/overlay2/07f6f4f06c8e18bfa8b104132cff43b5dc0f64ffb4b4c341a745abf1c058d1aa/diff:/var/lib/docker/overlay2/6146dc9e49b7cfd840dcf83603ba5654eedbdabdeba6a47ed37b9540df95b3dc/diff:/var/lib/docker/overlay2/9301871dd3992fd37d4fa495e588c9f044e10e341734e02997f3a08855c3a647/diff:/var/lib/docker/overlay2/f08d255565f3007a7033097b84d48dc5964bb491ae9da7d54ef75d803422941d/diff:/var/lib/d
ocker/overlay2/ffb7dfc431d833298f37b17ba73910970ca4887e4562867226090c024809b030/diff:/var/lib/docker/overlay2/c1fa340a85c3ccb353f2ec68e4d4208507a1fc339b0e63c299489a5ddbe5db6e/diff:/var/lib/docker/overlay2/aed9b5b3204bf14e554aeecd998e1d08f11b2c4b4643aa3942993e5bbfdcdea5/diff:/var/lib/docker/overlay2/f92f0a0a890930b99a18863e62c3af3b1ca4118f511f31f25fb30f5816f1e306/diff:/var/lib/docker/overlay2/a6001e111530a9b76c2f1f6eaa5983d7471ad99301e26a1a29e1e7e14c46fc25/diff:/var/lib/docker/overlay2/158bea0dc6daf4c80fa121667ef2be88c0e7c4dc6dc4eabfd2125a30403a7310/diff:/var/lib/docker/overlay2/0b082ffe105ffd42019f3bb0591e92c600fec4fdba58983da7ef71201342da2f/diff:/var/lib/docker/overlay2/85c8564cb266fc69d105571e429342a3a1e618f1ef232777f2f9dc0cfb7843dc/diff:/var/lib/docker/overlay2/ffee666ae571252d864d8129270279455332344b4cf1f50b5533483c483e0e29/diff:/var/lib/docker/overlay2/aa9f0f59d766b30da23b419f0ef65398bc8519903d407b98385baf7cdec79efd/diff:/var/lib/docker/overlay2/f322b716bd3a78423d2d0e16d77fbee15b4bd0803d0e65b024a925a14a7
a790a/diff:/var/lib/docker/overlay2/38b6941a9d9af30dc4abbeea1ed9f50331f557067e3f8f73e60e92669853a6b2/diff:/var/lib/docker/overlay2/4b7af9ea8b3868fc54ff26975a23a0aa3b2fdfb167e1536d80daeee27e98038c/diff:/var/lib/docker/overlay2/ee5f2aa02324c5ad9abf88568938efa32cbeeeee74b5b8bf25849922f7f34c40/diff:/var/lib/docker/overlay2/5b6043dd38472ee71b161257d55d7299454a3361c73bf42f91e41fcf318222a8/diff:/var/lib/docker/overlay2/5206772ef11c6059618ba392a15959b7a08cf16d6ecdd1acb3b7ae9b863309cf/diff:/var/lib/docker/overlay2/8b7c2d24480675d9b691b006217d2af5ed3a334f1cdbceaf50bb672c29508a0a/diff:/var/lib/docker/overlay2/3dfde0dfcb9c56924e3ecbd3ea8ebe3cac8fc1f018d7af0c25db468c6b4c56a5/diff:/var/lib/docker/overlay2/eea5f975c03e242f48308673b4fc38cf4c71bc091d7efcfa599618c68445f42a/diff:/var/lib/docker/overlay2/6ac45c1fa26e00015e1cbf85278c90a6332b7c174a6387ce98d3ec9aed6a4b38/diff:/var/lib/docker/overlay2/a661f542744c32a937ab0f1940b933cf03ef63ab8a41a662c4965de9ec1af7de/diff:/var/lib/docker/overlay2/f1da32cae243bb1c1811c9899935e81a61930c
b1a9dea9b2846986f62b09252d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-461512",
	                "Source": "/var/lib/docker/volumes/multinode-461512/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-461512",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-461512",
	                "name.minikube.sigs.k8s.io": "multinode-461512",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "163a6c74464ae8cfbbeac5751d9ff1163430d531f6a8bdf0a7bf165fd2d7285f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/163a6c74464a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-461512": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8075ab3952c8",
	                        "multinode-461512"
	                    ],
	                    "NetworkID": "17a26df4c936d295b7bf8159a236e6bc3a572797bfdc484aaa781501d0671db6",
	                    "EndpointID": "a717154286d4b3174f793c93183e3f11a93236484cc26df4bd1dfb6a5a9e3f9f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-461512 -n multinode-461512
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 logs -n 25: (1.054924031s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-332319                                  | second-332319        | jenkins | v1.29.0 | 24 Feb 23 00:55 UTC | 24 Feb 23 00:55 UTC |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| delete  | -p second-332319                                  | second-332319        | jenkins | v1.29.0 | 24 Feb 23 00:55 UTC | 24 Feb 23 00:55 UTC |
	| delete  | -p first-329045                                   | first-329045         | jenkins | v1.29.0 | 24 Feb 23 00:55 UTC | 24 Feb 23 00:55 UTC |
	| start   | -p mount-start-1-980786                           | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:55 UTC | 24 Feb 23 00:56 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-1-980786 ssh -- ls                    | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-980786                           | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| start   | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| delete  | -p mount-start-1-980786                           | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| start   | -p multinode-461512                               | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:57 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- apply -f                   | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- rollout                    | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- get pods -o                | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- get pods -o                | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:56:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:56:26.873911  156119 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:56:26.874003  156119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:56:26.874011  156119 out.go:309] Setting ErrFile to fd 2...
	I0224 00:56:26.874015  156119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:56:26.874153  156119 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 00:56:26.874692  156119 out.go:303] Setting JSON to false
	I0224 00:56:26.876076  156119 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2336,"bootTime":1677197851,"procs":979,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:56:26.876137  156119 start.go:135] virtualization: kvm guest
	I0224 00:56:26.878547  156119 out.go:177] * [multinode-461512] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:56:26.880035  156119 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:56:26.880048  156119 notify.go:220] Checking for updates...
	I0224 00:56:26.881550  156119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:56:26.883930  156119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:56:26.885601  156119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:56:26.887060  156119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:56:26.888472  156119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:56:26.889938  156119 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:56:26.958114  156119 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:56:26.958212  156119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:56:27.075270  156119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-24 00:56:27.067045184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:56:27.075365  156119 docker.go:294] overlay module found
	I0224 00:56:27.077524  156119 out.go:177] * Using the docker driver based on user configuration
	I0224 00:56:27.078929  156119 start.go:296] selected driver: docker
	I0224 00:56:27.078940  156119 start.go:857] validating driver "docker" against <nil>
	I0224 00:56:27.078951  156119 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:56:27.079708  156119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:56:27.193563  156119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-24 00:56:27.184958376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:56:27.193689  156119 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 00:56:27.193940  156119 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 00:56:27.196229  156119 out.go:177] * Using Docker driver with root privileges
	I0224 00:56:27.198040  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:27.198055  156119 cni.go:136] 0 nodes found, recommending kindnet
	I0224 00:56:27.198077  156119 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 00:56:27.198090  156119 start_flags.go:319] config:
	{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:56:27.200007  156119 out.go:177] * Starting control plane node multinode-461512 in cluster multinode-461512
	I0224 00:56:27.201495  156119 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:56:27.203079  156119 out.go:177] * Pulling base image ...
	I0224 00:56:27.204602  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:27.204631  156119 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 00:56:27.204640  156119 cache.go:57] Caching tarball of preloaded images
	I0224 00:56:27.204698  156119 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:56:27.204711  156119 preload.go:174] Found /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 00:56:27.204799  156119 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 00:56:27.205118  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:56:27.205141  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json: {Name:mkc5f17fe6300edcab127e334799db6103cd1896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:27.267676  156119 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 00:56:27.267703  156119 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 00:56:27.267720  156119 cache.go:193] Successfully downloaded all kic artifacts
	I0224 00:56:27.267760  156119 start.go:364] acquiring machines lock for multinode-461512: {Name:mk1450fd8b60e8292ab20dfb5f293bf4c24349b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:56:27.267849  156119 start.go:368] acquired machines lock for "multinode-461512" in 69.552µs
	I0224 00:56:27.267872  156119 start.go:93] Provisioning new machine with config: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 00:56:27.267939  156119 start.go:125] createHost starting for "" (driver="docker")
	I0224 00:56:27.270255  156119 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 00:56:27.270440  156119 start.go:159] libmachine.API.Create for "multinode-461512" (driver="docker")
	I0224 00:56:27.270467  156119 client.go:168] LocalClient.Create starting
	I0224 00:56:27.270517  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem
	I0224 00:56:27.270551  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:56:27.270567  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:56:27.270619  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem
	I0224 00:56:27.270636  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:56:27.270647  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:56:27.270912  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 00:56:27.333077  156119 cli_runner.go:211] docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 00:56:27.333140  156119 network_create.go:281] running [docker network inspect multinode-461512] to gather additional debugging logs...
	I0224 00:56:27.333160  156119 cli_runner.go:164] Run: docker network inspect multinode-461512
	W0224 00:56:27.394423  156119 cli_runner.go:211] docker network inspect multinode-461512 returned with exit code 1
	I0224 00:56:27.394449  156119 network_create.go:284] error running [docker network inspect multinode-461512]: docker network inspect multinode-461512: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-461512 not found
	I0224 00:56:27.394460  156119 network_create.go:286] output of [docker network inspect multinode-461512]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-461512 not found
	
	** /stderr **
	I0224 00:56:27.394503  156119 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:56:27.455645  156119 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-05e4e9615d36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8d:c7:71:a1} reservation:<nil>}
	I0224 00:56:27.456139  156119 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00119dd90}
	I0224 00:56:27.456162  156119 network_create.go:123] attempt to create docker network multinode-461512 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 00:56:27.456214  156119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-461512 multinode-461512
	I0224 00:56:27.554017  156119 network_create.go:107] docker network multinode-461512 192.168.58.0/24 created
	I0224 00:56:27.554044  156119 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-461512" container
	I0224 00:56:27.554110  156119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 00:56:27.614859  156119 cli_runner.go:164] Run: docker volume create multinode-461512 --label name.minikube.sigs.k8s.io=multinode-461512 --label created_by.minikube.sigs.k8s.io=true
	I0224 00:56:27.677245  156119 oci.go:103] Successfully created a docker volume multinode-461512
	I0224 00:56:27.677350  156119 cli_runner.go:164] Run: docker run --rm --name multinode-461512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512 --entrypoint /usr/bin/test -v multinode-461512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 00:56:28.282586  156119 oci.go:107] Successfully prepared a docker volume multinode-461512
	I0224 00:56:28.282625  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:28.282643  156119 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 00:56:28.282700  156119 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 00:56:33.221616  156119 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.938879081s)
	I0224 00:56:33.221645  156119 kic.go:199] duration metric: took 4.938999 seconds to extract preloaded images to volume
	W0224 00:56:33.221783  156119 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 00:56:33.221873  156119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 00:56:33.336975  156119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-461512 --name multinode-461512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-461512 --network multinode-461512 --ip 192.168.58.2 --volume multinode-461512:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 00:56:33.766940  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Running}}
	I0224 00:56:33.835216  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:33.903211  156119 cli_runner.go:164] Run: docker exec multinode-461512 stat /var/lib/dpkg/alternatives/iptables
	I0224 00:56:34.023451  156119 oci.go:144] the created container "multinode-461512" has a running status.
	I0224 00:56:34.023494  156119 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa...
	I0224 00:56:34.149337  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 00:56:34.149415  156119 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 00:56:34.267924  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:34.335216  156119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 00:56:34.335249  156119 kic_runner.go:114] Args: [docker exec --privileged multinode-461512 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 00:56:34.447454  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:34.510265  156119 machine.go:88] provisioning docker machine ...
	I0224 00:56:34.510299  156119 ubuntu.go:169] provisioning hostname "multinode-461512"
	I0224 00:56:34.510355  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:34.570139  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:34.570585  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:34.570608  156119 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-461512 && echo "multinode-461512" | sudo tee /etc/hostname
	I0224 00:56:34.705086  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-461512
	
	I0224 00:56:34.705147  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:34.770868  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:34.771436  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:34.771466  156119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-461512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-461512/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-461512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 00:56:34.901142  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 00:56:34.901175  156119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3785/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3785/.minikube}
	I0224 00:56:34.901198  156119 ubuntu.go:177] setting up certificates
	I0224 00:56:34.901207  156119 provision.go:83] configureAuth start
	I0224 00:56:34.901267  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:34.962515  156119 provision.go:138] copyHostCerts
	I0224 00:56:34.962556  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:56:34.962583  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem, removing ...
	I0224 00:56:34.962593  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:56:34.962663  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem (1123 bytes)
	I0224 00:56:34.962743  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:56:34.962766  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem, removing ...
	I0224 00:56:34.962774  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:56:34.962802  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem (1675 bytes)
	I0224 00:56:34.962872  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:56:34.962896  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem, removing ...
	I0224 00:56:34.962905  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:56:34.962938  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem (1078 bytes)
	I0224 00:56:34.962999  156119 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem org=jenkins.multinode-461512 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-461512]
	I0224 00:56:35.092954  156119 provision.go:172] copyRemoteCerts
	I0224 00:56:35.093010  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 00:56:35.093040  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.156066  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:35.248801  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 00:56:35.248871  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 00:56:35.265353  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 00:56:35.265410  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 00:56:35.281364  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 00:56:35.281420  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 00:56:35.297448  156119 provision.go:86] duration metric: configureAuth took 396.225503ms
	I0224 00:56:35.297476  156119 ubuntu.go:193] setting minikube options for container-runtime
	I0224 00:56:35.297667  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:56:35.297721  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.362747  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.363328  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.363350  156119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 00:56:35.493514  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 00:56:35.493534  156119 ubuntu.go:71] root file system type: overlay
	I0224 00:56:35.493657  156119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 00:56:35.493710  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.558616  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.559156  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.559259  156119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 00:56:35.697748  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 00:56:35.697810  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.760792  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.761192  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.761211  156119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 00:56:36.378180  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 00:56:35.693574257 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 00:56:36.378209  156119 machine.go:91] provisioned docker machine in 1.867921549s
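	[editor's note] The docker.service update above uses an only-replace-when-changed pattern: `diff -u` exits 0 when the files match, so the `{ mv; daemon-reload; restart; }` branch runs only when the candidate unit differs. A self-contained sketch with plain files and a stub reload (no systemd involved):

```shell
# Write the candidate config next to the current one; replace only on change.
cfg="$(mktemp)"
printf 'A=1\n' > "$cfg"
printf 'A=2\n' > "$cfg.new"          # candidate differs from current
diff -u "$cfg" "$cfg.new" >/dev/null || {
    mv "$cfg.new" "$cfg"             # stand-in for the sudo mv + systemctl calls
    echo "config changed, reload required"
}
cat "$cfg"                           # now holds the candidate contents
rm -f "$cfg"
```

This keeps a restart of the daemon off the hot path on machines whose unit file already matches.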
	I0224 00:56:36.378217  156119 client.go:171] LocalClient.Create took 9.107743068s
	I0224 00:56:36.378234  156119 start.go:167] duration metric: libmachine.API.Create for "multinode-461512" took 9.10779417s
	I0224 00:56:36.378241  156119 start.go:300] post-start starting for "multinode-461512" (driver="docker")
	I0224 00:56:36.378246  156119 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 00:56:36.378295  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 00:56:36.378328  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.439510  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.532927  156119 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 00:56:36.535317  156119 command_runner.go:130] > NAME="Ubuntu"
	I0224 00:56:36.535333  156119 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 00:56:36.535340  156119 command_runner.go:130] > ID=ubuntu
	I0224 00:56:36.535344  156119 command_runner.go:130] > ID_LIKE=debian
	I0224 00:56:36.535349  156119 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 00:56:36.535353  156119 command_runner.go:130] > VERSION_ID="20.04"
	I0224 00:56:36.535361  156119 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 00:56:36.535368  156119 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 00:56:36.535383  156119 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 00:56:36.535396  156119 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 00:56:36.535404  156119 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 00:56:36.535412  156119 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 00:56:36.535464  156119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 00:56:36.535479  156119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 00:56:36.535487  156119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 00:56:36.535493  156119 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 00:56:36.535501  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/addons for local assets ...
	I0224 00:56:36.535542  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/files for local assets ...
	I0224 00:56:36.535617  156119 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> 104702.pem in /etc/ssl/certs
	I0224 00:56:36.535627  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /etc/ssl/certs/104702.pem
	I0224 00:56:36.535707  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 00:56:36.541717  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:56:36.557600  156119 start.go:303] post-start completed in 179.348653ms
	I0224 00:56:36.557909  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:36.620493  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:56:36.620722  156119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:56:36.620766  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.682430  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.769875  156119 command_runner.go:130] > 16%
	I0224 00:56:36.769953  156119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 00:56:36.773511  156119 command_runner.go:130] > 246G
	I0224 00:56:36.773535  156119 start.go:128] duration metric: createHost completed in 9.505589343s
	I0224 00:56:36.773545  156119 start.go:83] releasing machines lock for "multinode-461512", held for 9.505686385s
	I0224 00:56:36.773607  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:36.838745  156119 ssh_runner.go:195] Run: cat /version.json
	I0224 00:56:36.838787  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.838841  156119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 00:56:36.838903  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.903175  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.905176  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:37.023088  156119 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 00:56:37.024457  156119 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0224 00:56:37.024572  156119 ssh_runner.go:195] Run: systemctl --version
	I0224 00:56:37.027986  156119 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0224 00:56:37.028005  156119 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0224 00:56:37.028050  156119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 00:56:37.031393  156119 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 00:56:37.031408  156119 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 00:56:37.031415  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1319702     Links: 1
	I0224 00:56:37.031421  156119 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:56:37.031430  156119 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:56:37.031437  156119 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:56:37.031448  156119 command_runner.go:130] > Change: 2023-02-24 00:41:21.061607898 +0000
	I0224 00:56:37.031457  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:37.031585  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 00:56:37.050031  156119 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 00:56:37.050186  156119 ssh_runner.go:195] Run: which cri-dockerd
	I0224 00:56:37.052699  156119 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 00:56:37.052809  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 00:56:37.059028  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 00:56:37.070915  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 00:56:37.084892  156119 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 00:56:37.084915  156119 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 00:56:37.084925  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:56:37.084950  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
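	[editor's note] One common heuristic for the cgroup-driver detection logged above (this is an illustrative sketch, not minikube's actual detect.go): cgroup v2 mounts the `cgroup2` filesystem on /sys/fs/cgroup, and anything else falls back to `cgroupfs`.

```shell
# stat -fc %T reports the filesystem type backing /sys/fs/cgroup.
# "cgroup2fs" means the unified (v2) hierarchy; otherwise assume cgroupfs.
if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
    driver="unified (cgroup v2)"
else
    driver="cgroupfs"
fi
echo "detected cgroup setup: ${driver}"
```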
	I0224 00:56:37.085032  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:56:37.095926  156119 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:56:37.095944  156119 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:56:37.096606  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 00:56:37.104062  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 00:56:37.111066  156119 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 00:56:37.111114  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 00:56:37.118164  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:56:37.124868  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 00:56:37.131719  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:56:37.138439  156119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 00:56:37.144572  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
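	[editor's note] The containerd edits above are all idempotent `sed -r` rewrites over /etc/containerd/config.toml. The SystemdCgroup toggle, replayed here on a scratch file:

```shell
cfg="$(mktemp)"
printf '          SystemdCgroup = true\n' > "$cfg"
# Same expression the log shows: capture leading indentation (\1), force false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
rm -f "$cfg"
```

Because the pattern matches any current value, re-running it on an already-patched file is a no-op, which is why minikube can apply the whole series unconditionally.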
	I0224 00:56:37.151902  156119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 00:56:37.157204  156119 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 00:56:37.157753  156119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
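	[editor's note] Both kernel knobs touched above are plain files under /proc/sys: the `sysctl net.bridge.bridge-nf-call-iptables` call is a read of one path and the `echo 1 >` is a write to another. A read-only sketch (writing requires root):

```shell
# sysctl net.ipv4.ip_forward is equivalent to reading this file;
# echo 1 > /proc/sys/net/ipv4/ip_forward is the corresponding write.
v="$(cat /proc/sys/net/ipv4/ip_forward)"
echo "net.ipv4.ip_forward = ${v}"
```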
	I0224 00:56:37.163514  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:37.230937  156119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 00:56:37.299453  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:56:37.299501  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:56:37.299548  156119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 00:56:37.307899  156119 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 00:56:37.308021  156119 command_runner.go:130] > [Unit]
	I0224 00:56:37.308042  156119 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 00:56:37.308050  156119 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 00:56:37.308057  156119 command_runner.go:130] > BindsTo=containerd.service
	I0224 00:56:37.308074  156119 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 00:56:37.308085  156119 command_runner.go:130] > Wants=network-online.target
	I0224 00:56:37.308099  156119 command_runner.go:130] > Requires=docker.socket
	I0224 00:56:37.308107  156119 command_runner.go:130] > StartLimitBurst=3
	I0224 00:56:37.308115  156119 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 00:56:37.308124  156119 command_runner.go:130] > [Service]
	I0224 00:56:37.308129  156119 command_runner.go:130] > Type=notify
	I0224 00:56:37.308138  156119 command_runner.go:130] > Restart=on-failure
	I0224 00:56:37.308149  156119 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 00:56:37.308168  156119 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 00:56:37.308182  156119 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 00:56:37.308196  156119 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 00:56:37.308212  156119 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 00:56:37.308222  156119 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 00:56:37.308233  156119 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 00:56:37.308246  156119 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 00:56:37.308261  156119 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 00:56:37.308269  156119 command_runner.go:130] > ExecStart=
	I0224 00:56:37.308294  156119 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 00:56:37.308306  156119 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 00:56:37.308317  156119 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 00:56:37.308330  156119 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 00:56:37.308340  156119 command_runner.go:130] > LimitNOFILE=infinity
	I0224 00:56:37.308350  156119 command_runner.go:130] > LimitNPROC=infinity
	I0224 00:56:37.308357  156119 command_runner.go:130] > LimitCORE=infinity
	I0224 00:56:37.308369  156119 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 00:56:37.308381  156119 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 00:56:37.308392  156119 command_runner.go:130] > TasksMax=infinity
	I0224 00:56:37.308401  156119 command_runner.go:130] > TimeoutStartSec=0
	I0224 00:56:37.308412  156119 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 00:56:37.308421  156119 command_runner.go:130] > Delegate=yes
	I0224 00:56:37.308430  156119 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 00:56:37.308441  156119 command_runner.go:130] > KillMode=process
	I0224 00:56:37.308457  156119 command_runner.go:130] > [Install]
	I0224 00:56:37.308467  156119 command_runner.go:130] > WantedBy=multi-user.target
	I0224 00:56:37.308846  156119 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 00:56:37.308906  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 00:56:37.317625  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:56:37.330783  156119 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:56:37.330801  156119 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:56:37.330846  156119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 00:56:37.413769  156119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 00:56:37.493630  156119 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 00:56:37.493664  156119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 00:56:37.506311  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:37.587705  156119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 00:56:37.780085  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:56:37.859962  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 00:56:37.860034  156119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 00:56:37.931654  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:56:38.003044  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:38.072134  156119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 00:56:38.082148  156119 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 00:56:38.082204  156119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 00:56:38.084772  156119 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 00:56:38.084794  156119 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 00:56:38.084801  156119 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0224 00:56:38.084807  156119 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 00:56:38.084813  156119 command_runner.go:130] > Access: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084818  156119 command_runner.go:130] > Modify: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084822  156119 command_runner.go:130] > Change: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084826  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:38.084877  156119 start.go:553] Will wait 60s for crictl version
	I0224 00:56:38.084927  156119 ssh_runner.go:195] Run: which crictl
	I0224 00:56:38.087254  156119 command_runner.go:130] > /usr/bin/crictl
	I0224 00:56:38.087305  156119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 00:56:38.159679  156119 command_runner.go:130] > Version:  0.1.0
	I0224 00:56:38.159697  156119 command_runner.go:130] > RuntimeName:  docker
	I0224 00:56:38.159701  156119 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 00:56:38.159707  156119 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 00:56:38.161040  156119 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 00:56:38.161101  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:56:38.181407  156119 command_runner.go:130] > 23.0.1
	I0224 00:56:38.181463  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:56:38.200527  156119 command_runner.go:130] > 23.0.1
	I0224 00:56:38.204165  156119 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 00:56:38.204242  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:56:38.265920  156119 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0224 00:56:38.268985  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
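	[editor's note] The /etc/hosts command above is an upsert: strip any existing `host.minikube.internal` line, append the fresh entry, then copy the temp file back so the edit is atomic. Replayed against a temp file instead of the real /etc/hosts:

```shell
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n' > "$hosts"
# grep -v drops the old entry (matching a literal tab before the name);
# the printf appends the current one, so at most one entry ever exists.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry remains
rm -f "$hosts" "$hosts.new"
```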
	I0224 00:56:38.277933  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:38.277995  156119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 00:56:38.293187  156119 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 00:56:38.293214  156119 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 00:56:38.293221  156119 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 00:56:38.293231  156119 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 00:56:38.293237  156119 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 00:56:38.293246  156119 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 00:56:38.293252  156119 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 00:56:38.293263  156119 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:56:38.294525  156119 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 00:56:38.294545  156119 docker.go:560] Images already preloaded, skipping extraction
	I0224 00:56:38.294595  156119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 00:56:38.310945  156119 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 00:56:38.310963  156119 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 00:56:38.310968  156119 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 00:56:38.310978  156119 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 00:56:38.310982  156119 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 00:56:38.310988  156119 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 00:56:38.310995  156119 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 00:56:38.311006  156119 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:56:38.311038  156119 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
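The preload check above works by capturing `docker images --format '{{.Repository}}:{{.Tag}}'` and verifying the expected images are already present, so extraction can be skipped. A minimal sketch of that membership check, run against the image list captured verbatim from the log (no Docker daemon needed here):

```shell
#!/bin/sh
# Simulated output of: docker images --format '{{.Repository}}:{{.Tag}}'
# (copied from the log above; a live daemon would produce this list)
preloaded='registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5'

# Check a few required images by exact line match (-x) as fixed strings (-F).
# All three are in the list, so each prints "present: ...".
for img in \
  registry.k8s.io/kube-apiserver:v1.26.1 \
  registry.k8s.io/etcd:3.5.6-0 \
  registry.k8s.io/coredns/coredns:v1.9.3; do
  if printf '%s\n' "$preloaded" | grep -qxF "$img"; then
    echo "present: $img"
  else
    echo "missing: $img"
  fi
done
```

The exact-match flags matter: a bare `grep etcd` would also match `apiserver-etcd-client` style names, while `-qxF` only accepts the full `repository:tag` line.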
	I0224 00:56:38.311049  156119 cache_images.go:84] Images are preloaded, skipping loading
	I0224 00:56:38.311090  156119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 00:56:38.330852  156119 command_runner.go:130] > cgroupfs
	I0224 00:56:38.332033  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:38.332048  156119 cni.go:136] 1 nodes found, recommending kindnet
	I0224 00:56:38.332062  156119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 00:56:38.332089  156119 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-461512 NodeName:multinode-461512 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 00:56:38.332230  156119 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-461512"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 00:56:38.332315  156119 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-461512 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 00:56:38.332364  156119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 00:56:38.338168  156119 command_runner.go:130] > kubeadm
	I0224 00:56:38.338183  156119 command_runner.go:130] > kubectl
	I0224 00:56:38.338186  156119 command_runner.go:130] > kubelet
	I0224 00:56:38.338735  156119 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 00:56:38.338786  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 00:56:38.344787  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0224 00:56:38.356127  156119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 00:56:38.367214  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0224 00:56:38.378556  156119 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 00:56:38.381008  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
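The `/etc/hosts` update above uses a strip-then-append pattern so the `control-plane.minikube.internal` entry ends up in the file exactly once, no matter how many times it runs. The same pattern against a scratch file (the real `/etc/hosts` and `sudo cp` are replaced with a temp file and plain `cp` for illustration):

```shell
#!/bin/bash
# Scratch file standing in for /etc/hosts; the real one is untouched here.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.2\tcontrol-plane.minikube.internal\n' > "$hosts"

# Same idempotent pattern as the log line above: drop any existing entry,
# then append the desired mapping, so repeated runs never duplicate it.
{ grep -v 'control-plane.minikube.internal$' "$hosts"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"   # the log uses `sudo cp`; plain cp suffices here

grep -c 'control-plane.minikube.internal' "$hosts"   # prints 1
```

Writing to a temp file (`/tmp/h.$$` in the log) before copying back matters: redirecting `grep`'s output straight into `/etc/hosts` would truncate the file before `grep` reads it.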
	I0224 00:56:38.389177  156119 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512 for IP: 192.168.58.2
	I0224 00:56:38.389209  156119 certs.go:186] acquiring lock for shared ca certs: {Name:mk4ccb66e3fb9104eb70d9107cb5563409a81019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.389322  156119 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key
	I0224 00:56:38.389357  156119 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key
	I0224 00:56:38.389393  156119 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key
	I0224 00:56:38.389404  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt with IP's: []
	I0224 00:56:38.550905  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt ...
	I0224 00:56:38.550929  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt: {Name:mkafd0f423e00282b1b80243bc87a0ef26cc5d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.551073  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key ...
	I0224 00:56:38.551084  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key: {Name:mk5a620a352449f2cb23b01bb46cef5a02dbb2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.551151  156119 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041
	I0224 00:56:38.551164  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 00:56:38.838168  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 ...
	I0224 00:56:38.838194  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041: {Name:mkb8322635e4298b3da32d32211030b8ff4d5117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.838330  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041 ...
	I0224 00:56:38.838340  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041: {Name:mk4e888a9de7fdd8f3164b7a40013da92cef9186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.838401  156119 certs.go:333] copying /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt
	I0224 00:56:38.838461  156119 certs.go:337] copying /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key
	I0224 00:56:38.838505  156119 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key
	I0224 00:56:38.838517  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt with IP's: []
	I0224 00:56:38.981872  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt ...
	I0224 00:56:38.981900  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt: {Name:mkc6815434daf237e1887623e67b42e18f74a84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.982037  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key ...
	I0224 00:56:38.982046  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key: {Name:mk8aa28fb7dd19a668557103ac8ed3108ce67ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.982122  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 00:56:38.982139  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 00:56:38.982148  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 00:56:38.982160  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 00:56:38.982169  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 00:56:38.982181  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 00:56:38.982193  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 00:56:38.982208  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 00:56:38.982264  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem (1338 bytes)
	W0224 00:56:38.982300  156119 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470_empty.pem, impossibly tiny 0 bytes
	I0224 00:56:38.982310  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 00:56:38.982333  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem (1078 bytes)
	I0224 00:56:38.982361  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem (1123 bytes)
	I0224 00:56:38.982382  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem (1675 bytes)
	I0224 00:56:38.982418  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:56:38.982445  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /usr/share/ca-certificates/104702.pem
	I0224 00:56:38.982459  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:38.982474  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem -> /usr/share/ca-certificates/10470.pem
	I0224 00:56:38.982949  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 00:56:39.000389  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 00:56:39.016474  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 00:56:39.032358  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 00:56:39.048338  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 00:56:39.063738  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 00:56:39.079202  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 00:56:39.094289  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 00:56:39.109512  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /usr/share/ca-certificates/104702.pem (1708 bytes)
	I0224 00:56:39.124994  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 00:56:39.140120  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem --> /usr/share/ca-certificates/10470.pem (1338 bytes)
	I0224 00:56:39.155148  156119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 00:56:39.166172  156119 ssh_runner.go:195] Run: openssl version
	I0224 00:56:39.170208  156119 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 00:56:39.170474  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 00:56:39.176912  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179556  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179633  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179673  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.183599  156119 command_runner.go:130] > b5213941
	I0224 00:56:39.183715  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 00:56:39.189989  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10470.pem && ln -fs /usr/share/ca-certificates/10470.pem /etc/ssl/certs/10470.pem"
	I0224 00:56:39.196414  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.198952  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.199009  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.199036  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.202956  156119 command_runner.go:130] > 51391683
	I0224 00:56:39.203141  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10470.pem /etc/ssl/certs/51391683.0"
	I0224 00:56:39.209360  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104702.pem && ln -fs /usr/share/ca-certificates/104702.pem /etc/ssl/certs/104702.pem"
	I0224 00:56:39.215931  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218533  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218651  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218687  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.222902  156119 command_runner.go:130] > 3ec20f2e
	I0224 00:56:39.222946  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104702.pem /etc/ssl/certs/3ec20f2e.0"
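The `openssl x509 -hash -noout` calls above compute the subject-name hash OpenSSL uses to look up trusted CAs, and the `<hash>.0` symlink in `/etc/ssl/certs` is what makes each certificate discoverable by that hash. A sketch of the same pattern with a throwaway self-signed cert in a temp directory (the `exampleCA` subject and paths are made up; this assumes an `openssl` binary is installed):

```shell
#!/bin/bash
dir=$(mktemp -d)

# Throwaway self-signed certificate (subject is a made-up example).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=exampleCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# Same pattern as the log: hash the cert's subject, then link <hash>.0 -> cert
# so OpenSSL's hashed-directory lookup can find it.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
```

The `.0` suffix is an index: if two different CAs hashed to the same value, the second link would be named `<hash>.1`, which is why the log checks `test -L` before relinking.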
	I0224 00:56:39.229292  156119 kubeadm.go:401] StartCluster: {Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:56:39.229399  156119 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 00:56:39.245533  156119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 00:56:39.251861  156119 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0224 00:56:39.251886  156119 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0224 00:56:39.251893  156119 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0224 00:56:39.251936  156119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 00:56:39.258076  156119 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 00:56:39.258115  156119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 00:56:39.264000  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0224 00:56:39.264024  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0224 00:56:39.264037  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0224 00:56:39.264045  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 00:56:39.264071  156119 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 00:56:39.264094  156119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 00:56:39.301420  156119 kubeadm.go:322] W0224 00:56:39.300738    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:56:39.301442  156119 command_runner.go:130] ! W0224 00:56:39.300738    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:56:39.339556  156119 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:56:39.339591  156119 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:56:39.400297  156119 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:56:39.400324  156119 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:56:51.057939  156119 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 00:56:51.057962  156119 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0224 00:56:51.058021  156119 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 00:56:51.058083  156119 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 00:56:51.058218  156119 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:56:51.058232  156119 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:56:51.058303  156119 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:56:51.058315  156119 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:56:51.058371  156119 kubeadm.go:322] OS: Linux
	I0224 00:56:51.058383  156119 command_runner.go:130] > OS: Linux
	I0224 00:56:51.058440  156119 kubeadm.go:322] CGROUPS_CPU: enabled
	I0224 00:56:51.058451  156119 command_runner.go:130] > CGROUPS_CPU: enabled
	I0224 00:56:51.058514  156119 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0224 00:56:51.058525  156119 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0224 00:56:51.058586  156119 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0224 00:56:51.058604  156119 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0224 00:56:51.058667  156119 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0224 00:56:51.058682  156119 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0224 00:56:51.058747  156119 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0224 00:56:51.058757  156119 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0224 00:56:51.058823  156119 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0224 00:56:51.058833  156119 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0224 00:56:51.058892  156119 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0224 00:56:51.058906  156119 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0224 00:56:51.058973  156119 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0224 00:56:51.058982  156119 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0224 00:56:51.059043  156119 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0224 00:56:51.059054  156119 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0224 00:56:51.059145  156119 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 00:56:51.059158  156119 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 00:56:51.059278  156119 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 00:56:51.059289  156119 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 00:56:51.059413  156119 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 00:56:51.059424  156119 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 00:56:51.059503  156119 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 00:56:51.061269  156119 out.go:204]   - Generating certificates and keys ...
	I0224 00:56:51.059648  156119 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 00:56:51.061393  156119 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 00:56:51.061410  156119 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0224 00:56:51.061501  156119 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 00:56:51.061521  156119 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0224 00:56:51.061611  156119 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 00:56:51.061624  156119 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 00:56:51.061709  156119 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 00:56:51.061728  156119 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0224 00:56:51.061816  156119 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 00:56:51.061832  156119 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0224 00:56:51.061911  156119 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 00:56:51.061924  156119 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0224 00:56:51.062009  156119 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 00:56:51.062022  156119 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0224 00:56:51.062206  156119 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062225  156119 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062310  156119 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 00:56:51.062351  156119 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0224 00:56:51.062503  156119 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062512  156119 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062603  156119 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 00:56:51.062617  156119 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 00:56:51.062694  156119 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 00:56:51.062704  156119 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 00:56:51.062768  156119 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 00:56:51.062778  156119 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0224 00:56:51.062863  156119 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 00:56:51.062874  156119 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 00:56:51.062941  156119 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 00:56:51.062951  156119 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 00:56:51.063014  156119 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 00:56:51.063026  156119 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 00:56:51.063122  156119 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 00:56:51.063138  156119 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 00:56:51.063223  156119 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 00:56:51.063236  156119 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 00:56:51.063415  156119 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:56:51.063431  156119 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:56:51.063544  156119 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:56:51.063555  156119 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:56:51.063620  156119 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 00:56:51.063637  156119 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 00:56:51.063754  156119 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 00:56:51.065411  156119 out.go:204]   - Booting up control plane ...
	I0224 00:56:51.063794  156119 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 00:56:51.065526  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 00:56:51.065540  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 00:56:51.065627  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 00:56:51.065646  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 00:56:51.065732  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 00:56:51.065743  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 00:56:51.065846  156119 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 00:56:51.065857  156119 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 00:56:51.066012  156119 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 00:56:51.066043  156119 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 00:56:51.066186  156119 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002046 seconds
	I0224 00:56:51.066201  156119 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002046 seconds
	I0224 00:56:51.066352  156119 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 00:56:51.066367  156119 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 00:56:51.066537  156119 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 00:56:51.066546  156119 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 00:56:51.066616  156119 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 00:56:51.066626  156119 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0224 00:56:51.066800  156119 kubeadm.go:322] [mark-control-plane] Marking the node multinode-461512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 00:56:51.066808  156119 command_runner.go:130] > [mark-control-plane] Marking the node multinode-461512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 00:56:51.066879  156119 kubeadm.go:322] [bootstrap-token] Using token: 7kk0e7.ephgzxkdwnb2txax
	I0224 00:56:51.068497  156119 out.go:204]   - Configuring RBAC rules ...
	I0224 00:56:51.066918  156119 command_runner.go:130] > [bootstrap-token] Using token: 7kk0e7.ephgzxkdwnb2txax
	I0224 00:56:51.068601  156119 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 00:56:51.068612  156119 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 00:56:51.068724  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 00:56:51.068744  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 00:56:51.068881  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 00:56:51.068889  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 00:56:51.069045  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 00:56:51.069067  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 00:56:51.069216  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 00:56:51.069228  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 00:56:51.069310  156119 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 00:56:51.069316  156119 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 00:56:51.069410  156119 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 00:56:51.069416  156119 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 00:56:51.069468  156119 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 00:56:51.069484  156119 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0224 00:56:51.069544  156119 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 00:56:51.069555  156119 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0224 00:56:51.069562  156119 kubeadm.go:322] 
	I0224 00:56:51.069626  156119 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 00:56:51.069634  156119 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0224 00:56:51.069638  156119 kubeadm.go:322] 
	I0224 00:56:51.069713  156119 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 00:56:51.069723  156119 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0224 00:56:51.069729  156119 kubeadm.go:322] 
	I0224 00:56:51.069767  156119 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 00:56:51.069778  156119 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0224 00:56:51.069851  156119 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 00:56:51.069860  156119 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 00:56:51.069924  156119 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 00:56:51.069934  156119 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 00:56:51.069943  156119 kubeadm.go:322] 
	I0224 00:56:51.070033  156119 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 00:56:51.070041  156119 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0224 00:56:51.070045  156119 kubeadm.go:322] 
	I0224 00:56:51.070128  156119 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 00:56:51.070145  156119 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 00:56:51.070156  156119 kubeadm.go:322] 
	I0224 00:56:51.070224  156119 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 00:56:51.070230  156119 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0224 00:56:51.070326  156119 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 00:56:51.070336  156119 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 00:56:51.070422  156119 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 00:56:51.070436  156119 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 00:56:51.070448  156119 kubeadm.go:322] 
	I0224 00:56:51.070536  156119 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 00:56:51.070542  156119 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0224 00:56:51.070625  156119 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 00:56:51.070640  156119 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0224 00:56:51.070650  156119 kubeadm.go:322] 
	I0224 00:56:51.070779  156119 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.070795  156119 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.070928  156119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 \
	I0224 00:56:51.070940  156119 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 \
	I0224 00:56:51.070982  156119 kubeadm.go:322] 	--control-plane 
	I0224 00:56:51.070994  156119 command_runner.go:130] > 	--control-plane 
	I0224 00:56:51.071005  156119 kubeadm.go:322] 
	I0224 00:56:51.071117  156119 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 00:56:51.071129  156119 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0224 00:56:51.071135  156119 kubeadm.go:322] 
	I0224 00:56:51.071234  156119 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.071243  156119 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.071323  156119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:56:51.071330  156119 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:56:51.071344  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:51.071356  156119 cni.go:136] 1 nodes found, recommending kindnet
	I0224 00:56:51.072989  156119 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 00:56:51.074346  156119 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 00:56:51.077387  156119 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 00:56:51.077406  156119 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 00:56:51.077416  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1317791     Links: 1
	I0224 00:56:51.077425  156119 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:56:51.077434  156119 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:56:51.077446  156119 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:56:51.077454  156119 command_runner.go:130] > Change: 2023-02-24 00:41:20.329534418 +0000
	I0224 00:56:51.077472  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:51.077544  156119 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 00:56:51.077564  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 00:56:51.093145  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 00:56:51.754878  156119 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0224 00:56:51.760806  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0224 00:56:51.766111  156119 command_runner.go:130] > serviceaccount/kindnet created
	I0224 00:56:51.776190  156119 command_runner.go:130] > daemonset.apps/kindnet created
	I0224 00:56:51.779605  156119 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 00:56:51.779732  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-461512 minikube.k8s.io/updated_at=2023_02_24T00_56_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.779732  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.786415  156119 command_runner.go:130] > -16
	I0224 00:56:51.786479  156119 ops.go:34] apiserver oom_adj: -16
	I0224 00:56:51.868085  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0224 00:56:51.868178  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.872766  156119 command_runner.go:130] > node/multinode-461512 labeled
	I0224 00:56:51.926103  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:52.429033  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:52.490103  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:52.928665  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:52.989441  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:53.429319  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:53.487454  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:53.928500  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:53.984992  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:54.428598  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:54.487697  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:54.929351  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:54.987839  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:55.428648  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:55.489701  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:55.929379  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:55.989542  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:56.429161  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:56.491917  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:56.929397  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:56.990258  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:57.428825  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:57.490396  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:57.929028  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:57.987311  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:58.429060  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:58.486546  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:58.928830  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:58.987208  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:59.429201  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:59.488299  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:59.929370  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:59.990335  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:00.428554  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:00.488248  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:00.928524  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:00.987544  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:01.429489  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:01.486969  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:01.929290  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:01.990759  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:02.428652  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:02.558544  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:02.929095  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:02.989541  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:03.428551  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:03.488262  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:03.928613  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:03.992047  156119 command_runner.go:130] > NAME      SECRETS   AGE
	I0224 00:57:03.992076  156119 command_runner.go:130] > default   0         0s
	I0224 00:57:03.994465  156119 kubeadm.go:1073] duration metric: took 12.214785856s to wait for elevateKubeSystemPrivileges.
	I0224 00:57:03.994495  156119 kubeadm.go:403] StartCluster complete in 24.765210823s
	I0224 00:57:03.994512  156119 settings.go:142] acquiring lock: {Name:mkee07ffcb1920ada8b15d9b3d3940c229b3dfc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:03.994587  156119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:03.995299  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/kubeconfig: {Name:mk3a4444ec91b5e085feb2b9897845e988f9c9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:03.995495  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 00:57:03.995643  156119 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 00:57:03.995751  156119 addons.go:65] Setting storage-provisioner=true in profile "multinode-461512"
	I0224 00:57:03.995772  156119 addons.go:227] Setting addon storage-provisioner=true in "multinode-461512"
	I0224 00:57:03.995770  156119 addons.go:65] Setting default-storageclass=true in profile "multinode-461512"
	I0224 00:57:03.995803  156119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-461512"
	I0224 00:57:03.995831  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:03.995773  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:03.995872  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:03.996157  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:03.996140  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:03.996333  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:04.000853  156119 cert_rotation.go:137] Starting client certificate rotation controller
	I0224 00:57:04.001173  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.001192  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.001204  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.001218  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.014777  156119 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0224 00:57:04.014799  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.014807  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.014813  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.014820  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.014831  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.014839  156119 round_trippers.go:580]     Audit-Id: da8b48bc-15a0-4f51-a3fb-fa5179cd269a
	I0224 00:57:04.014852  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.014861  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.014893  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"354","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.015331  156119 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"354","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.015370  156119 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.015375  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.015381  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.015387  156119 round_trippers.go:473]     Content-Type: application/json
	I0224 00:57:04.015393  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.021473  156119 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0224 00:57:04.021492  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.021500  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.021506  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.021511  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.021516  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.021521  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.021527  156119 round_trippers.go:580]     Audit-Id: a5ce785d-eaf4-47d3-899b-884496bf15bc
	I0224 00:57:04.021533  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.021550  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"355","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.095148  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:04.095363  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:04.095631  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0224 00:57:04.095637  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.095644  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.095652  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.100206  156119 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:57:04.098443  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:04.101631  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.101646  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.101656  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.101669  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.101679  156119 round_trippers.go:580]     Content-Length: 109
	I0224 00:57:04.101695  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.101705  156119 round_trippers.go:580]     Audit-Id: 17d756db-f91a-416a-a957-67dd9a9e7055
	I0224 00:57:04.101718  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.101747  156119 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 00:57:04.101770  156119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 00:57:04.101821  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:04.101750  156119 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"364"},"items":[]}
	I0224 00:57:04.102214  156119 addons.go:227] Setting addon default-storageclass=true in "multinode-461512"
	I0224 00:57:04.102247  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:04.102543  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:04.173040  156119 command_runner.go:130] > apiVersion: v1
	I0224 00:57:04.173066  156119 command_runner.go:130] > data:
	I0224 00:57:04.173092  156119 command_runner.go:130] >   Corefile: |
	I0224 00:57:04.173101  156119 command_runner.go:130] >     .:53 {
	I0224 00:57:04.173114  156119 command_runner.go:130] >         errors
	I0224 00:57:04.173122  156119 command_runner.go:130] >         health {
	I0224 00:57:04.173130  156119 command_runner.go:130] >            lameduck 5s
	I0224 00:57:04.173137  156119 command_runner.go:130] >         }
	I0224 00:57:04.173144  156119 command_runner.go:130] >         ready
	I0224 00:57:04.173153  156119 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0224 00:57:04.173165  156119 command_runner.go:130] >            pods insecure
	I0224 00:57:04.173173  156119 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0224 00:57:04.173183  156119 command_runner.go:130] >            ttl 30
	I0224 00:57:04.173190  156119 command_runner.go:130] >         }
	I0224 00:57:04.173197  156119 command_runner.go:130] >         prometheus :9153
	I0224 00:57:04.173205  156119 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0224 00:57:04.173213  156119 command_runner.go:130] >            max_concurrent 1000
	I0224 00:57:04.173219  156119 command_runner.go:130] >         }
	I0224 00:57:04.173226  156119 command_runner.go:130] >         cache 30
	I0224 00:57:04.173233  156119 command_runner.go:130] >         loop
	I0224 00:57:04.173239  156119 command_runner.go:130] >         reload
	I0224 00:57:04.173246  156119 command_runner.go:130] >         loadbalance
	I0224 00:57:04.173251  156119 command_runner.go:130] >     }
	I0224 00:57:04.173257  156119 command_runner.go:130] > kind: ConfigMap
	I0224 00:57:04.173264  156119 command_runner.go:130] > metadata:
	I0224 00:57:04.173276  156119 command_runner.go:130] >   creationTimestamp: "2023-02-24T00:56:50Z"
	I0224 00:57:04.173282  156119 command_runner.go:130] >   name: coredns
	I0224 00:57:04.173290  156119 command_runner.go:130] >   namespace: kube-system
	I0224 00:57:04.173296  156119 command_runner.go:130] >   resourceVersion: "233"
	I0224 00:57:04.173304  156119 command_runner.go:130] >   uid: 7fe2b65d-0034-4b86-8324-3680843f0957
	I0224 00:57:04.173501  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 00:57:04.231198  156119 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 00:57:04.231220  156119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 00:57:04.231262  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:04.234028  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:04.307570  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:04.448740  156119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 00:57:04.467408  156119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 00:57:04.522600  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.522618  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.522626  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.522632  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.550666  156119 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0224 00:57:04.550692  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.550702  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.550711  156119 round_trippers.go:580]     Audit-Id: e1198285-4281-4004-8a55-ba3728334db4
	I0224 00:57:04.550719  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.550728  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.550736  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.550744  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.550760  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.550791  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"364","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.550907  156119 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-461512" context rescaled to 1 replicas
	I0224 00:57:04.550938  156119 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 00:57:04.553823  156119 out.go:177] * Verifying Kubernetes components...
	I0224 00:57:04.555419  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:04.657342  156119 command_runner.go:130] > configmap/coredns replaced
	I0224 00:57:04.661992  156119 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0224 00:57:05.365630  156119 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0224 00:57:05.365715  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0224 00:57:05.365737  156119 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 00:57:05.365757  156119 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 00:57:05.365785  156119 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0224 00:57:05.365812  156119 command_runner.go:130] > pod/storage-provisioner created
	I0224 00:57:05.365893  156119 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0224 00:57:05.367825  156119 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 00:57:05.366490  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:05.369275  156119 addons.go:492] enable addons completed in 1.373630033s: enabled=[storage-provisioner default-storageclass]
	I0224 00:57:05.369486  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:05.369704  156119 node_ready.go:35] waiting up to 6m0s for node "multinode-461512" to be "Ready" ...
	I0224 00:57:05.369755  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.369761  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.369769  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.369777  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.371500  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.371521  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.371530  156119 round_trippers.go:580]     Audit-Id: 327d98f5-d198-48ad-8f2b-a22ca674e747
	I0224 00:57:05.371539  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.371557  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.371573  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.371581  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.371595  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.371694  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:05.372371  156119 node_ready.go:49] node "multinode-461512" has status "Ready":"True"
	I0224 00:57:05.372387  156119 node_ready.go:38] duration metric: took 2.669497ms waiting for node "multinode-461512" to be "Ready" ...
	I0224 00:57:05.372396  156119 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:05.372462  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:05.372472  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.372484  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.372496  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.375520  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:05.375534  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.375540  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.375546  156119 round_trippers.go:580]     Audit-Id: d51323f6-269f-4730-9b3a-6748ce95ebd6
	I0224 00:57:05.375551  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.375557  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.375562  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.375568  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.375875  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"381"},"items":[{"metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60467 chars]
	I0224 00:57:05.379460  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:05.379560  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:05.379589  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.379617  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.379634  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.449393  156119 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0224 00:57:05.449426  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.449436  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.449446  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.449460  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.449474  156119 round_trippers.go:580]     Audit-Id: 59906891-2606-4784-beb9-1b83db7e30c1
	I0224 00:57:05.449492  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.449506  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.449638  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:05.450220  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.450270  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.450291  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.450311  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.452259  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.452283  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.452292  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.452320  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.452335  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.452350  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.452364  156119 round_trippers.go:580]     Audit-Id: 5cf93540-2541-4c5c-9d68-40947dde9727
	I0224 00:57:05.452392  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.452560  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:05.953722  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:05.953749  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.953762  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.953771  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.956166  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:05.956190  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.956199  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.956205  156119 round_trippers.go:580]     Audit-Id: 1b968c3b-85d7-4530-aeb7-eaca81036baf
	I0224 00:57:05.956211  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.956220  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.956231  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.956241  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.956333  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:05.956738  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.956748  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.956755  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.956761  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.958635  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.958652  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.958659  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.958664  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.958670  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.958676  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.958693  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.958702  156119 round_trippers.go:580]     Audit-Id: a29a3197-0e31-4f4e-b593-b4e8a01e5316
	I0224 00:57:05.958795  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:06.454027  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:06.454049  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.454080  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.454089  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.457335  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:06.457398  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.457421  156119 round_trippers.go:580]     Audit-Id: 8e0f50c7-1d0d-4309-8ab7-65bb946f9f6a
	I0224 00:57:06.457441  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.457465  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.457475  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.457485  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.457513  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.457632  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:06.458247  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:06.458262  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.458274  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.458283  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.460054  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.460075  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.460085  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.460094  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.460103  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.460119  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.460127  156119 round_trippers.go:580]     Audit-Id: 096acd8f-c7eb-4484-a660-37f04ab7ca8d
	I0224 00:57:06.460139  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.460252  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:06.953075  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:06.953093  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.953101  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.953108  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.954834  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.954853  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.954860  156119 round_trippers.go:580]     Audit-Id: d41074bd-aa8c-43bc-b383-7f3d6e27b665
	I0224 00:57:06.954867  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.954872  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.954877  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.954883  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.954889  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.954996  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:06.955448  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:06.955463  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.955476  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.955491  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.957250  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.957273  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.957283  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.957293  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.957302  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.957312  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.957324  156119 round_trippers.go:580]     Audit-Id: f9ca1c24-f027-463b-8cb9-bfcd6eca4fb0
	I0224 00:57:06.957334  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.957446  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:07.453059  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:07.453079  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.453087  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.453093  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.454891  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.454916  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.454924  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.454930  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.454939  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.454947  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.454960  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.454970  156119 round_trippers.go:580]     Audit-Id: fe46c642-0845-48c6-bb77-26600ead4367
	I0224 00:57:07.455063  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:07.455586  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:07.455601  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.455613  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.455622  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.457140  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.457156  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.457163  156119 round_trippers.go:580]     Audit-Id: f3c372b4-9a75-4144-a15f-f52892ef7bc4
	I0224 00:57:07.457169  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.457176  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.457186  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.457196  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.457205  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.457367  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:07.457667  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:07.954022  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:07.954044  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.954056  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.954082  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.955933  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.955957  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.955967  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.955977  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.955986  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.955995  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.956006  156119 round_trippers.go:580]     Audit-Id: 41ed5086-9afc-4916-a3e1-44992d32fc6a
	I0224 00:57:07.956014  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.956162  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:07.956734  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:07.956748  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.956755  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.956761  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.958497  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.958514  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.958521  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.958527  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.958533  156119 round_trippers.go:580]     Audit-Id: eab9681e-a99d-4ecf-bff6-6f44beb4097c
	I0224 00:57:07.958540  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.958548  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.958559  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.958656  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:08.453330  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:08.453352  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.453361  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.453368  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.455431  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:08.455451  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.455458  156119 round_trippers.go:580]     Audit-Id: 0d4533fd-ab86-475e-9909-a672a5af3d30
	I0224 00:57:08.455464  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.455469  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.455474  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.455483  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.455491  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.455629  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:08.456123  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:08.456135  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.456142  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.456149  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.457627  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.457644  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.457650  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.457656  156119 round_trippers.go:580]     Audit-Id: 90166fdf-366e-4886-a91f-21f9602e3879
	I0224 00:57:08.457662  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.457676  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.457684  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.457696  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.457813  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:08.953355  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:08.953375  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.953383  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.953389  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.955394  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.955418  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.955427  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.955439  156119 round_trippers.go:580]     Audit-Id: 39ced71f-62d9-4a6c-8428-1c3c1396f33d
	I0224 00:57:08.955448  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.955457  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.955465  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.955479  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.955567  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:08.956038  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:08.956053  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.956063  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.956072  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.957695  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.957712  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.957722  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.957732  156119 round_trippers.go:580]     Audit-Id: e2601c7b-9631-4702-aed0-d430378ff3c7
	I0224 00:57:08.957745  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.957751  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.957758  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.957764  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.957880  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:09.453369  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:09.453390  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.453403  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.453411  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.455436  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.455464  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.455476  156119 round_trippers.go:580]     Audit-Id: b51be5eb-29c5-489e-855e-afa50317332f
	I0224 00:57:09.455484  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.455491  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.455500  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.455515  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.455525  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.455656  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:09.456227  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:09.456244  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.456253  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.456262  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.457714  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.457735  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.457744  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.457752  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.457763  156119 round_trippers.go:580]     Audit-Id: a1883485-24fc-4eec-8e11-54351a2bcca8
	I0224 00:57:09.457772  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.457780  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.457790  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.457920  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:09.458304  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:09.953260  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:09.953300  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.953346  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.953356  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.955493  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:09.955515  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.955525  156119 round_trippers.go:580]     Audit-Id: 58c5585f-1de3-4dab-89b1-8079c6dbbdc0
	I0224 00:57:09.955531  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.955536  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.955542  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.955547  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.955553  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.955642  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:09.956174  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:09.956194  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.956205  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.956214  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.957827  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.957845  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.957854  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.957862  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.957870  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.957879  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.957888  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.957898  156119 round_trippers.go:580]     Audit-Id: 7cc230b0-4b42-43cd-bb83-06108a39273a
	I0224 00:57:09.957992  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:10.453534  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:10.453554  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.453562  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.453569  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.455781  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:10.455804  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.455815  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.455822  156119 round_trippers.go:580]     Audit-Id: 0109268f-f1c9-4475-bb53-6c032bdca083
	I0224 00:57:10.455830  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.455842  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.455866  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.455878  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.455977  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:10.456485  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:10.456499  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.456510  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.456518  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.458118  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:10.458139  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.458150  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.458158  156119 round_trippers.go:580]     Audit-Id: b35a3269-dce3-4b70-8680-770782bbd264
	I0224 00:57:10.458166  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.458176  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.458189  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.458199  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.458313  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:10.953981  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:10.954008  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.954021  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.954031  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.956500  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:10.956525  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.956535  156119 round_trippers.go:580]     Audit-Id: e824ed4b-8ed8-4feb-8fff-35594d2ea94a
	I0224 00:57:10.956543  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.956551  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.956559  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.956569  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.956577  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.956689  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:10.957245  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:10.957261  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.957273  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.957282  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.959225  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:10.959250  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.959260  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.959272  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.959286  156119 round_trippers.go:580]     Audit-Id: 56da9001-b0fe-4d34-9e44-3e94b13abbf4
	I0224 00:57:10.959295  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.959309  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.959319  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.959482  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:11.453072  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:11.453091  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.453099  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.453105  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.456655  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:11.456678  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.456688  156119 round_trippers.go:580]     Audit-Id: 74c1c589-92bc-40d6-b9f9-83c125ad06ef
	I0224 00:57:11.456697  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.456706  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.456715  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.456724  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.456733  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.456848  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:11.457430  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:11.457443  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.457455  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.457465  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.459551  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:11.459572  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.459582  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.459591  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.459609  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.459623  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.459644  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.459652  156119 round_trippers.go:580]     Audit-Id: bcd61aa2-3aca-4122-8931-6a4d656927fc
	I0224 00:57:11.459769  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:11.460148  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:11.953478  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:11.953550  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.953583  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.953623  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.956110  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:11.956129  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.956137  156119 round_trippers.go:580]     Audit-Id: 749813bf-cb7f-4fdd-bf3f-ee531176b82d
	I0224 00:57:11.956146  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.956155  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.956167  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.956178  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.956190  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.956323  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:11.956945  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:11.956969  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.956982  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.956992  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.958976  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:11.958992  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.959002  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.959011  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.959020  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.959038  156119 round_trippers.go:580]     Audit-Id: e51b67ee-35fa-48ec-94be-e8e562a0c6a5
	I0224 00:57:11.959046  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.959054  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.959182  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:12.453907  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:12.453932  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.453944  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.453955  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.456269  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:12.456293  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.456313  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.456322  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.456330  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.456343  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.456354  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.456363  156119 round_trippers.go:580]     Audit-Id: b091047b-a0cc-44ca-a39e-f769a199843b
	I0224 00:57:12.456468  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:12.456927  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:12.456940  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.456948  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.456954  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.458795  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:12.458817  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.458828  156119 round_trippers.go:580]     Audit-Id: 41c179b8-64d3-430c-a047-a283b4acbc5e
	I0224 00:57:12.458838  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.458847  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.458858  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.458871  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.458886  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.459012  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:12.953194  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:12.953214  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.953225  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.953233  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.955523  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:12.955548  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.955559  156119 round_trippers.go:580]     Audit-Id: 8525e674-8e1e-42f5-bfc6-e8b3cab6a176
	I0224 00:57:12.955568  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.955576  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.955586  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.955598  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.955610  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.955735  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:12.956273  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:12.956326  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.956348  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.956367  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.958382  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:12.958404  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.958414  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.958424  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.958433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.958441  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.958451  156119 round_trippers.go:580]     Audit-Id: 71a61bb0-c883-4497-a045-7361aedae0bc
	I0224 00:57:12.958487  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.958617  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.453118  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:13.453143  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.453156  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.453166  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.455439  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.455462  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.455473  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.455482  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.455490  156119 round_trippers.go:580]     Audit-Id: b2f058e9-1627-4e5d-b58c-2820bfe7d73d
	I0224 00:57:13.455498  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.455510  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.455518  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.455633  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:13.456067  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:13.456078  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.456085  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.456091  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.458192  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.458211  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.458220  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.458230  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.458238  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.458248  156119 round_trippers.go:580]     Audit-Id: 950e4308-6adb-4145-9538-f063164a5892
	I0224 00:57:13.458261  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.458273  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.458387  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.954032  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:13.954055  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.954090  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.954101  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.956464  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.956487  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.956498  156119 round_trippers.go:580]     Audit-Id: b556084c-91dc-464e-8969-7c6e774ad6f0
	I0224 00:57:13.956508  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.956516  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.956528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.956540  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.956551  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.956669  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:13.957226  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:13.957244  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.957256  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.957266  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.959299  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.959321  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.959330  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.959339  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.959347  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.959363  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.959374  156119 round_trippers.go:580]     Audit-Id: e9386e4c-3f42-48d6-873e-78b66c96357d
	I0224 00:57:13.959386  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.959500  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.959800  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:14.453140  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:14.453164  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.453181  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.453192  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.455466  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.455486  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.455495  156119 round_trippers.go:580]     Audit-Id: bcc42f5c-a6fd-4e1b-8bda-4cb758dc51cf
	I0224 00:57:14.455505  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.455513  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.455520  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.455528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.455547  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.455702  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:14.456266  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:14.456283  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.456293  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.456302  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.459116  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.459141  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.459151  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.459161  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.459169  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.459183  156119 round_trippers.go:580]     Audit-Id: 3c917c2c-dca5-4b1e-b07a-082a1835d89c
	I0224 00:57:14.459196  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.459208  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.459363  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:14.953933  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:14.953957  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.953968  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.953976  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.956522  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.956545  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.956554  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.956563  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.956572  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.956584  156119 round_trippers.go:580]     Audit-Id: 6baa1850-317a-4c4f-8076-6e678e2fefd8
	I0224 00:57:14.956598  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.956607  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.956726  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:14.957311  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:14.957334  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.957345  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.957355  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.959332  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:14.959353  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.959363  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.959371  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.959380  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.959390  156119 round_trippers.go:580]     Audit-Id: 94755ac3-f783-4793-b1e5-a9344ac31ec6
	I0224 00:57:14.959429  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.959442  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.959551  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:15.453634  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:15.453661  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.453674  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.453685  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.456276  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:15.456301  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.456313  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.456324  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.456332  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.456353  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.456362  156119 round_trippers.go:580]     Audit-Id: af65e0ce-7f6c-489b-bf4e-7a40233a96d3
	I0224 00:57:15.456375  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.456507  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:15.457091  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:15.457111  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.457123  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.457133  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.459144  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:15.459163  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.459173  156119 round_trippers.go:580]     Audit-Id: 4ef12c62-20b9-42de-a90e-132b016f3e8b
	I0224 00:57:15.459182  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.459191  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.459200  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.459207  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.459216  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.459333  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:15.953577  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:15.953601  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.953610  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.953620  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.956347  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:15.956372  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.956381  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.956390  156119 round_trippers.go:580]     Audit-Id: 66e9356a-bf22-4e84-922f-a00973814444
	I0224 00:57:15.956399  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.956408  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.956430  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.956446  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.956595  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:15.957151  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:15.957171  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.957183  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.957194  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.958961  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:15.958981  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.958991  156119 round_trippers.go:580]     Audit-Id: 1d2319af-f646-4434-8306-063edd2d4ffc
	I0224 00:57:15.959001  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.959015  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.959024  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.959032  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.959044  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.959157  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:16.453440  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:16.453459  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.453467  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.453474  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.455785  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:16.455806  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.455815  156119 round_trippers.go:580]     Audit-Id: 0e887c87-349f-4bde-8776-930bfa586a03
	I0224 00:57:16.455824  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.455833  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.455842  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.455854  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.455875  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.455981  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:16.456513  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:16.456526  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.456536  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.456545  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.459586  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:16.459605  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.459615  156119 round_trippers.go:580]     Audit-Id: efc0c02d-4ebf-44bd-88f3-59702d6edfc0
	I0224 00:57:16.459623  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.459632  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.459645  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.459655  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.459670  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.459778  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:16.460062  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:16.953590  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:16.953613  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.953622  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.953628  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.955899  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:16.955967  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.955987  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.956005  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.956038  156119 round_trippers.go:580]     Audit-Id: 7770f169-8a87-43b7-af94-527141d2ce91
	I0224 00:57:16.956059  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.956076  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.956092  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.956620  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:16.957170  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:16.957218  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.957235  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.957245  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.959246  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:16.959267  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.959278  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.959287  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.959298  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.959311  156119 round_trippers.go:580]     Audit-Id: 88ab6be6-09f7-4b3d-90b7-a5f4979ec682
	I0224 00:57:16.959321  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.959332  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.959520  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:17.454124  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:17.454154  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.454167  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.454178  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.456390  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:17.456414  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.456424  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.456433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.456442  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.456454  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.456463  156119 round_trippers.go:580]     Audit-Id: ccd62a61-7dc2-4af2-8b28-9a129fdec264
	I0224 00:57:17.456473  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.456582  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:17.457135  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:17.457147  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.457158  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.457168  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.459106  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:17.459127  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.459136  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.459144  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.459152  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.459162  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.459173  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.459181  156119 round_trippers.go:580]     Audit-Id: d737f9d3-6d3f-43c2-9105-1bc36798607b
	I0224 00:57:17.459291  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:17.953984  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:17.954007  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.954018  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.954024  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.956274  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:17.956295  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.956304  156119 round_trippers.go:580]     Audit-Id: 2def9cfe-c66f-4b3b-9fdd-072622ced7ef
	I0224 00:57:17.956313  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.956321  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.956337  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.956346  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.956356  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.956523  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:17.957067  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:17.957082  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.957092  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.957103  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.958803  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:17.958823  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.958832  156119 round_trippers.go:580]     Audit-Id: 0b77223c-ff2e-4c56-8d53-98233cf04262
	I0224 00:57:17.958840  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.958849  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.958860  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.958871  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.958883  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.959016  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.453684  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:18.453708  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.453720  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.453731  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.456025  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:18.456051  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.456061  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.456069  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.456078  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.456087  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.456102  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.456109  156119 round_trippers.go:580]     Audit-Id: 80dea813-20b4-4081-9e9a-0fa8968fc217
	I0224 00:57:18.456225  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:18.456653  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:18.456667  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.456676  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.456685  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.458643  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:18.458664  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.458674  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.458682  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.458691  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.458703  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.458715  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.458726  156119 round_trippers.go:580]     Audit-Id: de267d16-690a-4304-8a9b-08d52cdc8a43
	I0224 00:57:18.458839  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.953454  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:18.953477  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.953489  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.953504  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.955900  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:18.955920  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.955929  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.955938  156119 round_trippers.go:580]     Audit-Id: a6f30687-aec1-4019-9257-87f017c9d840
	I0224 00:57:18.955948  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.955962  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.955972  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.955981  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.956101  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:18.956581  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:18.956593  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.956600  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.956606  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.958358  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:18.958377  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.958388  156119 round_trippers.go:580]     Audit-Id: 31b6c82c-57f5-4409-94fd-e14781b76fca
	I0224 00:57:18.958398  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.958408  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.958420  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.958433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.958442  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.958556  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.958980  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:19.453127  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:19.453153  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.453163  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.453170  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.454911  156119 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0224 00:57:19.454943  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.454953  156119 round_trippers.go:580]     Audit-Id: 55fb85b8-43bb-4bdf-a24e-2c53cc59bd49
	I0224 00:57:19.454962  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.454973  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.454984  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.454996  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.455007  156119 round_trippers.go:580]     Content-Length: 216
	I0224 00:57:19.455018  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.455048  156119 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-9ws7r\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-9ws7r","kind":"pods"},"code":404}
	I0224 00:57:19.455267  156119 pod_ready.go:97] error getting pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-9ws7r" not found
	I0224 00:57:19.455292  156119 pod_ready.go:81] duration metric: took 14.075778884s waiting for pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace to be "Ready" ...
	E0224 00:57:19.455307  156119 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-9ws7r" not found
	I0224 00:57:19.455322  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:19.455383  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:19.455394  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.455404  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.455415  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.457963  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:19.457983  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.457993  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.458007  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.458017  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.458030  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.458039  156119 round_trippers.go:580]     Audit-Id: 9953caaf-fe6a-42df-a8d7-43f5756e281d
	I0224 00:57:19.458050  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.458179  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"406","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 00:57:19.458668  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:19.458682  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.458689  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.458695  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.460106  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.460124  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.460134  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.460140  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.460149  156119 round_trippers.go:580]     Audit-Id: ec911c26-4334-44c8-869e-df2b63401210
	I0224 00:57:19.460159  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.460172  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.460184  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.460290  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:19.960915  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:19.960935  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.960943  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.960950  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.962952  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.962984  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.962992  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.962998  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.963003  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.963009  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.963014  156119 round_trippers.go:580]     Audit-Id: 50f08eb7-5ee1-411e-98d1-fc3376c2b760
	I0224 00:57:19.963019  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.963117  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"406","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 00:57:19.963565  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:19.963579  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.963586  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.963592  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.965187  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.965209  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.965222  156119 round_trippers.go:580]     Audit-Id: 8b1a7939-cbe2-4be6-921b-50808a4dd1f3
	I0224 00:57:19.965231  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.965239  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.965247  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.965260  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.965271  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.965387  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.460898  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:20.460917  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.460925  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.460932  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.462894  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.462912  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.462919  156119 round_trippers.go:580]     Audit-Id: a64fb2fd-9a90-4112-8d82-60f641dc06a0
	I0224 00:57:20.462925  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.462930  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.462938  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.462946  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.462954  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.463039  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 00:57:20.463484  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.463499  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.463506  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.463513  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.465178  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.465193  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.465203  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.465212  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.465224  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.465235  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.465243  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.465249  156119 round_trippers.go:580]     Audit-Id: 750aded8-6b14-48fc-9d3d-559b51d9f4a8
	I0224 00:57:20.465344  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.465622  156119 pod_ready.go:92] pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.465643  156119 pod_ready.go:81] duration metric: took 1.010309087s waiting for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.465651  156119 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.465689  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-461512
	I0224 00:57:20.465696  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.465702  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.465711  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.467217  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.467233  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.467240  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.467246  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.467251  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.467257  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.467265  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.467276  156119 round_trippers.go:580]     Audit-Id: b70e0a44-ad01-45fc-b0c4-bdd1c866483f
	I0224 00:57:20.467391  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-461512","namespace":"kube-system","uid":"85634add-ee6f-426e-8dce-c5bd503ada85","resourceVersion":"279","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.mirror":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.seen":"2023-02-24T00:56:50.894583011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 00:57:20.467726  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.467737  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.467744  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.467750  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.469177  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.469196  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.469206  156119 round_trippers.go:580]     Audit-Id: 00991078-94e8-4d4b-9997-20dd395be4a8
	I0224 00:57:20.469215  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.469223  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.469234  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.469245  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.469258  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.469358  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.469618  156119 pod_ready.go:92] pod "etcd-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.469629  156119 pod_ready.go:81] duration metric: took 3.970892ms waiting for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.469641  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.469675  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-461512
	I0224 00:57:20.469682  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.469688  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.469694  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.471104  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.471127  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.471137  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.471146  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.471162  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.471171  156119 round_trippers.go:580]     Audit-Id: 00f81bf6-077f-4272-ad5d-34e595caecf2
	I0224 00:57:20.471183  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.471195  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.471303  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-461512","namespace":"kube-system","uid":"915d077c-7a17-4c95-9199-8146800a171b","resourceVersion":"382","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.mirror":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.seen":"2023-02-24T00:56:50.894613111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 00:57:20.471667  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.471679  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.471685  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.471692  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.472935  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.472951  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.472960  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.472968  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.472976  156119 round_trippers.go:580]     Audit-Id: a85d0139-2ebc-4a3d-87c2-c760977905be
	I0224 00:57:20.472988  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.473002  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.473015  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.473118  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.473376  156119 pod_ready.go:92] pod "kube-apiserver-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.473387  156119 pod_ready.go:81] duration metric: took 3.740685ms waiting for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.473395  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.473427  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-461512
	I0224 00:57:20.473434  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.473440  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.473451  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.474866  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.474884  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.474893  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.474902  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.474914  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.474923  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.474935  156119 round_trippers.go:580]     Audit-Id: e3ba6523-f36b-47a6-9780-847e53a3000e
	I0224 00:57:20.474947  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.475049  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-461512","namespace":"kube-system","uid":"8e426bcd-dab9-430d-b166-f7ab34013208","resourceVersion":"274","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.mirror":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.seen":"2023-02-24T00:56:50.894614692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 00:57:20.475355  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.475364  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.475371  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.475377  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.476548  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.476562  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.476569  156119 round_trippers.go:580]     Audit-Id: 6d5afb8d-1de1-4df0-a1c8-a0bccd3b815b
	I0224 00:57:20.476578  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.476593  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.476605  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.476618  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.476629  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.476691  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.476909  156119 pod_ready.go:92] pod "kube-controller-manager-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.476918  156119 pod_ready.go:81] duration metric: took 3.518277ms waiting for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.476924  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.476954  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:20.476961  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.476968  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.476974  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.478193  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.478211  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.478220  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.478229  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.478241  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.478252  156119 round_trippers.go:580]     Audit-Id: fee2a3c3-73ac-4117-a14c-0dc80a1c7e5b
	I0224 00:57:20.478263  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.478275  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.478360  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvmbp","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9e9bac2-7132-4b60-a535-80b6113e0e8d","resourceVersion":"392","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 00:57:20.478690  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.478702  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.478709  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.478715  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.479868  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.479883  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.479889  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.479895  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.479901  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.479910  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.479922  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.479931  156119 round_trippers.go:580]     Audit-Id: 4601d3e4-cc6d-4956-bf2b-277f6786a542
	I0224 00:57:20.480034  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.480276  156119 pod_ready.go:92] pod "kube-proxy-dvmbp" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.480289  156119 pod_ready.go:81] duration metric: took 3.359473ms waiting for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.480299  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.661666  156119 request.go:622] Waited for 181.310592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:20.661707  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:20.661711  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.661719  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.661728  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.663380  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.663399  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.663408  156119 round_trippers.go:580]     Audit-Id: c0e2ce02-f018-4a9a-bfa0-44745e4544fb
	I0224 00:57:20.663417  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.663427  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.663449  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.663462  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.663471  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.663553  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-461512","namespace":"kube-system","uid":"64f3ef30-ed87-42cc-b0e2-cd3c7c922383","resourceVersion":"280","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.mirror":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.seen":"2023-02-24T00:56:50.894615981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 00:57:20.861228  156119 request.go:622] Waited for 197.349951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.861288  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.861295  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.861304  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.861311  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.863076  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.863096  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.863105  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.863113  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.863121  156119 round_trippers.go:580]     Audit-Id: f64a2596-fe22-458d-afaf-5f8873e56ad1
	I0224 00:57:20.863130  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.863154  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.863166  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.863248  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.863541  156119 pod_ready.go:92] pod "kube-scheduler-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.863566  156119 pod_ready.go:81] duration metric: took 383.260366ms waiting for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.863580  156119 pod_ready.go:38] duration metric: took 15.491172291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:20.863608  156119 api_server.go:51] waiting for apiserver process to appear ...
	I0224 00:57:20.863654  156119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 00:57:20.872462  156119 command_runner.go:130] > 2043
	I0224 00:57:20.873050  156119 api_server.go:71] duration metric: took 16.322084904s to wait for apiserver process to appear ...
	I0224 00:57:20.873068  156119 api_server.go:87] waiting for apiserver healthz status ...
	I0224 00:57:20.873079  156119 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0224 00:57:20.876781  156119 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0224 00:57:20.876822  156119 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0224 00:57:20.876830  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.876838  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.876844  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.877498  156119 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0224 00:57:20.877513  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.877520  156119 round_trippers.go:580]     Content-Length: 263
	I0224 00:57:20.877525  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.877531  156119 round_trippers.go:580]     Audit-Id: 755f3a52-a8c5-4941-9a59-7e14cde38318
	I0224 00:57:20.877538  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.877550  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.877562  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.877574  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.877592  156119 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0224 00:57:20.877657  156119 api_server.go:140] control plane version: v1.26.1
	I0224 00:57:20.877671  156119 api_server.go:130] duration metric: took 4.597635ms to wait for apiserver health ...
	I0224 00:57:20.877679  156119 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 00:57:21.060995  156119 request.go:622] Waited for 183.255895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.061048  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.061053  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.061065  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.061072  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.064181  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:21.064204  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.064211  156119 round_trippers.go:580]     Audit-Id: 0b5c1d7a-9759-461b-8451-9d12c1a71646
	I0224 00:57:21.064217  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.064222  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.064236  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.064244  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.064250  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.064666  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 00:57:21.066423  156119 system_pods.go:59] 8 kube-system pods found
	I0224 00:57:21.066444  156119 system_pods.go:61] "coredns-787d4945fb-r6m7z" [8c8eb92c-c99a-4eea-8518-bd2bac5df023] Running
	I0224 00:57:21.066451  156119 system_pods.go:61] "etcd-multinode-461512" [85634add-ee6f-426e-8dce-c5bd503ada85] Running
	I0224 00:57:21.066462  156119 system_pods.go:61] "kindnet-5p4bl" [5b593525-bd00-43d2-8402-71e8fd30a4ef] Running
	I0224 00:57:21.066470  156119 system_pods.go:61] "kube-apiserver-multinode-461512" [915d077c-7a17-4c95-9199-8146800a171b] Running
	I0224 00:57:21.066481  156119 system_pods.go:61] "kube-controller-manager-multinode-461512" [8e426bcd-dab9-430d-b166-f7ab34013208] Running
	I0224 00:57:21.066488  156119 system_pods.go:61] "kube-proxy-dvmbp" [e9e9bac2-7132-4b60-a535-80b6113e0e8d] Running
	I0224 00:57:21.066493  156119 system_pods.go:61] "kube-scheduler-multinode-461512" [64f3ef30-ed87-42cc-b0e2-cd3c7c922383] Running
	I0224 00:57:21.066499  156119 system_pods.go:61] "storage-provisioner" [82115459-afa2-425c-a8bc-9da99885c6ae] Running
	I0224 00:57:21.066503  156119 system_pods.go:74] duration metric: took 188.820667ms to wait for pod list to return data ...
	I0224 00:57:21.066512  156119 default_sa.go:34] waiting for default service account to be created ...
	I0224 00:57:21.261977  156119 request.go:622] Waited for 195.394959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0224 00:57:21.262057  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0224 00:57:21.262089  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.262102  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.262113  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.264245  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:21.264272  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.264282  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.264288  156119 round_trippers.go:580]     Content-Length: 261
	I0224 00:57:21.264294  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.264303  156119 round_trippers.go:580]     Audit-Id: c62a4440-1953-4578-b1ba-eb610c0bab2a
	I0224 00:57:21.264309  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.264317  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.264340  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.264371  156119 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f47f03da-df5a-4d85-b75b-77af9a8736c4","resourceVersion":"339","creationTimestamp":"2023-02-24T00:57:03Z"}}]}
	I0224 00:57:21.264571  156119 default_sa.go:45] found service account: "default"
	I0224 00:57:21.264588  156119 default_sa.go:55] duration metric: took 198.067894ms for default service account to be created ...
	I0224 00:57:21.264598  156119 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 00:57:21.460925  156119 request.go:622] Waited for 196.262808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.460986  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.460998  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.461006  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.461013  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.465441  156119 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 00:57:21.465462  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.465469  156119 round_trippers.go:580]     Audit-Id: b2c74b27-22a7-4901-8b8b-7d9af07e9f84
	I0224 00:57:21.465475  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.465482  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.465491  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.465503  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.465511  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.465930  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 00:57:21.467576  156119 system_pods.go:86] 8 kube-system pods found
	I0224 00:57:21.467594  156119 system_pods.go:89] "coredns-787d4945fb-r6m7z" [8c8eb92c-c99a-4eea-8518-bd2bac5df023] Running
	I0224 00:57:21.467599  156119 system_pods.go:89] "etcd-multinode-461512" [85634add-ee6f-426e-8dce-c5bd503ada85] Running
	I0224 00:57:21.467603  156119 system_pods.go:89] "kindnet-5p4bl" [5b593525-bd00-43d2-8402-71e8fd30a4ef] Running
	I0224 00:57:21.467607  156119 system_pods.go:89] "kube-apiserver-multinode-461512" [915d077c-7a17-4c95-9199-8146800a171b] Running
	I0224 00:57:21.467613  156119 system_pods.go:89] "kube-controller-manager-multinode-461512" [8e426bcd-dab9-430d-b166-f7ab34013208] Running
	I0224 00:57:21.467619  156119 system_pods.go:89] "kube-proxy-dvmbp" [e9e9bac2-7132-4b60-a535-80b6113e0e8d] Running
	I0224 00:57:21.467630  156119 system_pods.go:89] "kube-scheduler-multinode-461512" [64f3ef30-ed87-42cc-b0e2-cd3c7c922383] Running
	I0224 00:57:21.467636  156119 system_pods.go:89] "storage-provisioner" [82115459-afa2-425c-a8bc-9da99885c6ae] Running
	I0224 00:57:21.467642  156119 system_pods.go:126] duration metric: took 203.038059ms to wait for k8s-apps to be running ...
	I0224 00:57:21.467650  156119 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 00:57:21.467688  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:21.477136  156119 system_svc.go:56] duration metric: took 9.480308ms WaitForService to wait for kubelet.
	I0224 00:57:21.477158  156119 kubeadm.go:578] duration metric: took 16.926191742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 00:57:21.477179  156119 node_conditions.go:102] verifying NodePressure condition ...
	I0224 00:57:21.661570  156119 request.go:622] Waited for 184.324772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:21.661617  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:21.661622  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.661629  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.661637  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.663630  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:21.663648  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.663655  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.663661  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.663667  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.663672  156119 round_trippers.go:580]     Audit-Id: ff45d116-a5ff-4ef3-83b9-4f576977529e
	I0224 00:57:21.663678  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.663684  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.663765  156119 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5052 chars]
	I0224 00:57:21.664572  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:21.664599  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:21.664611  156119 node_conditions.go:105] duration metric: took 187.427166ms to run NodePressure ...
	I0224 00:57:21.664624  156119 start.go:228] waiting for startup goroutines ...
	I0224 00:57:21.664634  156119 start.go:233] waiting for cluster config update ...
	I0224 00:57:21.664651  156119 start.go:242] writing updated cluster config ...
	I0224 00:57:21.667301  156119 out.go:177] 
	I0224 00:57:21.668878  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:21.668957  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:21.670818  156119 out.go:177] * Starting worker node multinode-461512-m02 in cluster multinode-461512
	I0224 00:57:21.672112  156119 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:57:21.673508  156119 out.go:177] * Pulling base image ...
	I0224 00:57:21.675167  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:57:21.675188  156119 cache.go:57] Caching tarball of preloaded images
	I0224 00:57:21.675191  156119 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:57:21.675271  156119 preload.go:174] Found /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 00:57:21.675287  156119 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 00:57:21.675391  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:21.740022  156119 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 00:57:21.740046  156119 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 00:57:21.740068  156119 cache.go:193] Successfully downloaded all kic artifacts
	I0224 00:57:21.740101  156119 start.go:364] acquiring machines lock for multinode-461512-m02: {Name:mk0c24cecb0f2bb7442eab1def0480438fceaed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:57:21.740198  156119 start.go:368] acquired machines lock for "multinode-461512-m02" in 79.668µs
	I0224 00:57:21.740221  156119 start.go:93] Provisioning new machine with config: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:21.740296  156119 start.go:125] createHost starting for "m02" (driver="docker")
	I0224 00:57:21.742330  156119 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 00:57:21.742446  156119 start.go:159] libmachine.API.Create for "multinode-461512" (driver="docker")
	I0224 00:57:21.742474  156119 client.go:168] LocalClient.Create starting
	I0224 00:57:21.742557  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem
	I0224 00:57:21.742595  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:57:21.742616  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:57:21.742669  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem
	I0224 00:57:21.742690  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:57:21.742699  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:57:21.742882  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:57:21.803637  156119 network_create.go:76] Found existing network {name:multinode-461512 subnet:0xc00137e270 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0224 00:57:21.803671  156119 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-461512-m02" container
	I0224 00:57:21.803721  156119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 00:57:21.864339  156119 cli_runner.go:164] Run: docker volume create multinode-461512-m02 --label name.minikube.sigs.k8s.io=multinode-461512-m02 --label created_by.minikube.sigs.k8s.io=true
	I0224 00:57:21.925959  156119 oci.go:103] Successfully created a docker volume multinode-461512-m02
	I0224 00:57:21.926036  156119 cli_runner.go:164] Run: docker run --rm --name multinode-461512-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512-m02 --entrypoint /usr/bin/test -v multinode-461512-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 00:57:22.523715  156119 oci.go:107] Successfully prepared a docker volume multinode-461512-m02
	I0224 00:57:22.523755  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:57:22.523774  156119 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 00:57:22.523826  156119 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 00:57:27.366312  156119 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.842437991s)
	I0224 00:57:27.366338  156119 kic.go:199] duration metric: took 4.842561 seconds to extract preloaded images to volume
	W0224 00:57:27.366475  156119 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 00:57:27.366588  156119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 00:57:27.484704  156119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-461512-m02 --name multinode-461512-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-461512-m02 --network multinode-461512 --ip 192.168.58.3 --volume multinode-461512-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 00:57:27.913966  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Running}}
	I0224 00:57:27.980030  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.048591  156119 cli_runner.go:164] Run: docker exec multinode-461512-m02 stat /var/lib/dpkg/alternatives/iptables
	I0224 00:57:28.168673  156119 oci.go:144] the created container "multinode-461512-m02" has a running status.
	I0224 00:57:28.168709  156119 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa...
	I0224 00:57:28.247375  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 00:57:28.247417  156119 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 00:57:28.371424  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.442342  156119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 00:57:28.442367  156119 kic_runner.go:114] Args: [docker exec --privileged multinode-461512-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 00:57:28.556209  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.620388  156119 machine.go:88] provisioning docker machine ...
	I0224 00:57:28.620421  156119 ubuntu.go:169] provisioning hostname "multinode-461512-m02"
	I0224 00:57:28.620479  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:28.683699  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:28.684119  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:28.684133  156119 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-461512-m02 && echo "multinode-461512-m02" | sudo tee /etc/hostname
	I0224 00:57:28.821854  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-461512-m02
	
	I0224 00:57:28.821928  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:28.883891  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:28.884325  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:28.884343  156119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-461512-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-461512-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-461512-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 00:57:29.017350  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 00:57:29.017377  156119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3785/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3785/.minikube}
	I0224 00:57:29.017390  156119 ubuntu.go:177] setting up certificates
	I0224 00:57:29.017397  156119 provision.go:83] configureAuth start
	I0224 00:57:29.017443  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:29.082607  156119 provision.go:138] copyHostCerts
	I0224 00:57:29.082649  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:57:29.082675  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem, removing ...
	I0224 00:57:29.082684  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:57:29.082743  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem (1078 bytes)
	I0224 00:57:29.082807  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:57:29.082826  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem, removing ...
	I0224 00:57:29.082833  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:57:29.082855  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem (1123 bytes)
	I0224 00:57:29.082895  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:57:29.082911  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem, removing ...
	I0224 00:57:29.082917  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:57:29.082935  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem (1675 bytes)
	I0224 00:57:29.082977  156119 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem org=jenkins.multinode-461512-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-461512-m02]
	I0224 00:57:29.384338  156119 provision.go:172] copyRemoteCerts
	I0224 00:57:29.384393  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 00:57:29.384423  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.447725  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:29.541343  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 00:57:29.541398  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 00:57:29.558035  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 00:57:29.558112  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 00:57:29.574265  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 00:57:29.574309  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 00:57:29.589986  156119 provision.go:86] duration metric: configureAuth took 572.577934ms
	I0224 00:57:29.590008  156119 ubuntu.go:193] setting minikube options for container-runtime
	I0224 00:57:29.590178  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:29.590223  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.652331  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:29.652777  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:29.652791  156119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 00:57:29.781589  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 00:57:29.781613  156119 ubuntu.go:71] root file system type: overlay
	I0224 00:57:29.781744  156119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 00:57:29.781807  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.844419  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:29.844870  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:29.844933  156119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 00:57:29.986246  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 00:57:29.986309  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.049834  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:30.050268  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:30.050289  156119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 00:57:30.687544  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 00:57:29.979032345 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 00:57:30.687580  156119 machine.go:91] provisioned docker machine in 2.067171191s
	I0224 00:57:30.687592  156119 client.go:171] LocalClient.Create took 8.945109003s
	I0224 00:57:30.687611  156119 start.go:167] duration metric: libmachine.API.Create for "multinode-461512" took 8.945165168s
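Editor's note: the SSH command above (`diff -u ... || { mv ...; systemctl ... }`) is minikube's idempotent unit-update pattern: the rendered `docker.service.new` only replaces the installed unit, and the daemon only restarts, when the two files actually differ. A minimal sketch of the same pattern against scratch files (paths here are illustrative stand-ins, not the real `/lib/systemd/system` paths):

```shell
# Stand-ins for /lib/systemd/system/docker.service and docker.service.new.
unit=$(mktemp)
new=$(mktemp)
printf 'old\n' > "$unit"
printf 'new\n' > "$new"

# diff exits non-zero when the files differ, so the block after || only
# runs (swap the file, reload, restart) when an update is actually needed.
if ! diff -u "$unit" "$new" > /dev/null; then
  mv "$new" "$unit"
  echo "unit changed: would run daemon-reload + restart"
else
  echo "unit unchanged: skip restart"
fi
cat "$unit"
```

In the log above the files do differ (the unified diff is printed), so the restart branch runs.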
	I0224 00:57:30.687620  156119 start.go:300] post-start starting for "multinode-461512-m02" (driver="docker")
	I0224 00:57:30.687629  156119 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 00:57:30.687699  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 00:57:30.687750  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.752342  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:30.844917  156119 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 00:57:30.847316  156119 command_runner.go:130] > NAME="Ubuntu"
	I0224 00:57:30.847332  156119 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 00:57:30.847336  156119 command_runner.go:130] > ID=ubuntu
	I0224 00:57:30.847341  156119 command_runner.go:130] > ID_LIKE=debian
	I0224 00:57:30.847347  156119 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 00:57:30.847351  156119 command_runner.go:130] > VERSION_ID="20.04"
	I0224 00:57:30.847356  156119 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 00:57:30.847361  156119 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 00:57:30.847366  156119 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 00:57:30.847377  156119 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 00:57:30.847382  156119 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 00:57:30.847385  156119 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 00:57:30.847447  156119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 00:57:30.847461  156119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 00:57:30.847469  156119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 00:57:30.847479  156119 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 00:57:30.847489  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/addons for local assets ...
	I0224 00:57:30.847529  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/files for local assets ...
	I0224 00:57:30.847587  156119 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> 104702.pem in /etc/ssl/certs
	I0224 00:57:30.847596  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /etc/ssl/certs/104702.pem
	I0224 00:57:30.847670  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 00:57:30.853775  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:57:30.869542  156119 start.go:303] post-start completed in 181.911089ms
	I0224 00:57:30.869862  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:30.931451  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:30.931708  156119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:57:30.931749  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.993353  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.081629  156119 command_runner.go:130] > 16%!(MISSING)
	I0224 00:57:31.081908  156119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 00:57:31.085246  156119 command_runner.go:130] > 245G
	I0224 00:57:31.085430  156119 start.go:128] duration metric: createHost completed in 9.345126362s
	I0224 00:57:31.085446  156119 start.go:83] releasing machines lock for "multinode-461512-m02", held for 9.345235208s
	I0224 00:57:31.085505  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:31.150235  156119 out.go:177] * Found network options:
	I0224 00:57:31.151641  156119 out.go:177]   - NO_PROXY=192.168.58.2
	W0224 00:57:31.152933  156119 proxy.go:119] fail to check proxy env: Error ip not in block
	W0224 00:57:31.152972  156119 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 00:57:31.153040  156119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 00:57:31.153083  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:31.153101  156119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 00:57:31.153157  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:31.222012  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.223120  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.346553  156119 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 00:57:31.347703  156119 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 00:57:31.347719  156119 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 00:57:31.347725  156119 command_runner.go:130] > Device: c5h/197d	Inode: 1319702     Links: 1
	I0224 00:57:31.347745  156119 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:57:31.347757  156119 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:57:31.347768  156119 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:57:31.347776  156119 command_runner.go:130] > Change: 2023-02-24 00:41:21.061607898 +0000
	I0224 00:57:31.347782  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:31.347834  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 00:57:31.366229  156119 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 00:57:31.366285  156119 ssh_runner.go:195] Run: which cri-dockerd
	I0224 00:57:31.368696  156119 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 00:57:31.368888  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 00:57:31.374837  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 00:57:31.386217  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 00:57:31.399774  156119 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 00:57:31.399831  156119 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 00:57:31.399848  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:57:31.399871  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:57:31.399959  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:57:31.410925  156119 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:57:31.410946  156119 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:57:31.411946  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 00:57:31.419796  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 00:57:31.426753  156119 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 00:57:31.426798  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 00:57:31.433598  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:57:31.440432  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 00:57:31.447234  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:57:31.453987  156119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 00:57:31.460171  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 00:57:31.466867  156119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 00:57:31.471985  156119 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 00:57:31.472480  156119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 00:57:31.478215  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:31.547538  156119 ssh_runner.go:195] Run: sudo systemctl restart containerd
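Editor's note: the `sed` runs above rewrite `/etc/containerd/config.toml` in place to force the cgroupfs driver. A sketch of the `SystemdCgroup` toggle applied to a scratch copy of the config (the snippet content is illustrative, not the node's real config.toml):

```shell
# Scratch stand-in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the logged command: flip SystemdCgroup to false
# while preserving the line's original indentation via the \1 backreference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```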
	I0224 00:57:31.618255  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:57:31.618305  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:57:31.618357  156119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 00:57:31.628809  156119 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 00:57:31.628832  156119 command_runner.go:130] > [Unit]
	I0224 00:57:31.628844  156119 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 00:57:31.628854  156119 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 00:57:31.628861  156119 command_runner.go:130] > BindsTo=containerd.service
	I0224 00:57:31.628871  156119 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 00:57:31.628882  156119 command_runner.go:130] > Wants=network-online.target
	I0224 00:57:31.628892  156119 command_runner.go:130] > Requires=docker.socket
	I0224 00:57:31.628902  156119 command_runner.go:130] > StartLimitBurst=3
	I0224 00:57:31.628912  156119 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 00:57:31.628921  156119 command_runner.go:130] > [Service]
	I0224 00:57:31.628931  156119 command_runner.go:130] > Type=notify
	I0224 00:57:31.628941  156119 command_runner.go:130] > Restart=on-failure
	I0224 00:57:31.628954  156119 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0224 00:57:31.628969  156119 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 00:57:31.628983  156119 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 00:57:31.629003  156119 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 00:57:31.629018  156119 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 00:57:31.629032  156119 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 00:57:31.629045  156119 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 00:57:31.629061  156119 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 00:57:31.629080  156119 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 00:57:31.629094  156119 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 00:57:31.629104  156119 command_runner.go:130] > ExecStart=
	I0224 00:57:31.629130  156119 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 00:57:31.629148  156119 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 00:57:31.629158  156119 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 00:57:31.629172  156119 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 00:57:31.629182  156119 command_runner.go:130] > LimitNOFILE=infinity
	I0224 00:57:31.629192  156119 command_runner.go:130] > LimitNPROC=infinity
	I0224 00:57:31.629199  156119 command_runner.go:130] > LimitCORE=infinity
	I0224 00:57:31.629213  156119 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 00:57:31.629225  156119 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 00:57:31.629235  156119 command_runner.go:130] > TasksMax=infinity
	I0224 00:57:31.629244  156119 command_runner.go:130] > TimeoutStartSec=0
	I0224 00:57:31.629257  156119 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 00:57:31.629264  156119 command_runner.go:130] > Delegate=yes
	I0224 00:57:31.629287  156119 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 00:57:31.629298  156119 command_runner.go:130] > KillMode=process
	I0224 00:57:31.629308  156119 command_runner.go:130] > [Install]
	I0224 00:57:31.629319  156119 command_runner.go:130] > WantedBy=multi-user.target
	I0224 00:57:31.629344  156119 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 00:57:31.629395  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 00:57:31.638296  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:57:31.650466  156119 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:57:31.650494  156119 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
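Editor's note: the step above rewrites `/etc/crictl.yaml` so that `crictl` talks to cri-dockerd instead of containerd (compare the earlier write of the same file pointing at `containerd.sock`). A sketch of the file minikube produces, written to a scratch path rather than `/etc/crictl.yaml`:

```shell
# Scratch stand-in for /etc/crictl.yaml.
crictl_yaml=$(mktemp)
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  'image-endpoint: unix:///var/run/cri-dockerd.sock' > "$crictl_yaml"
cat "$crictl_yaml"
```

With this in place, `sudo crictl version` (as run later in the log) goes through the cri-dockerd socket and reports `RuntimeName: docker`.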
	I0224 00:57:31.651341  156119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 00:57:31.758941  156119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 00:57:31.845154  156119 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 00:57:31.845186  156119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 00:57:31.860738  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:31.940338  156119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 00:57:32.136014  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:57:32.209062  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 00:57:32.209129  156119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 00:57:32.286125  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:57:32.361521  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:32.441481  156119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 00:57:32.452095  156119 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 00:57:32.452144  156119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 00:57:32.454825  156119 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 00:57:32.454846  156119 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 00:57:32.454856  156119 command_runner.go:130] > Device: ceh/206d	Inode: 206         Links: 1
	I0224 00:57:32.454866  156119 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 00:57:32.454877  156119 command_runner.go:130] > Access: 2023-02-24 00:57:32.443280111 +0000
	I0224 00:57:32.454890  156119 command_runner.go:130] > Modify: 2023-02-24 00:57:32.443280111 +0000
	I0224 00:57:32.454906  156119 command_runner.go:130] > Change: 2023-02-24 00:57:32.447280514 +0000
	I0224 00:57:32.454913  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:32.454927  156119 start.go:553] Will wait 60s for crictl version
	I0224 00:57:32.454969  156119 ssh_runner.go:195] Run: which crictl
	I0224 00:57:32.457330  156119 command_runner.go:130] > /usr/bin/crictl
	I0224 00:57:32.457479  156119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 00:57:32.532332  156119 command_runner.go:130] > Version:  0.1.0
	I0224 00:57:32.532353  156119 command_runner.go:130] > RuntimeName:  docker
	I0224 00:57:32.532358  156119 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 00:57:32.532363  156119 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 00:57:32.532381  156119 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 00:57:32.532420  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:57:32.551686  156119 command_runner.go:130] > 23.0.1
	I0224 00:57:32.552625  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:57:32.571374  156119 command_runner.go:130] > 23.0.1
	I0224 00:57:32.574917  156119 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 00:57:32.576410  156119 out.go:177]   - env NO_PROXY=192.168.58.2
	I0224 00:57:32.577911  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:57:32.641068  156119 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0224 00:57:32.644187  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
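Editor's note: the `/etc/hosts` update above uses a grep-v-then-append pattern so the `host.minikube.internal` entry ends up present exactly once, whether or not it already existed. The same pattern against a scratch hosts file (the temp path is illustrative):

```shell
# Scratch stand-in for /etc/hosts, already containing an old entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry, then append a fresh one; cp keeps the original
# file's inode/permissions, which matters for the real /etc/hosts.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1
```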
	I0224 00:57:32.653326  156119 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512 for IP: 192.168.58.3
	I0224 00:57:32.653360  156119 certs.go:186] acquiring lock for shared ca certs: {Name:mk4ccb66e3fb9104eb70d9107cb5563409a81019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:32.653502  156119 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key
	I0224 00:57:32.653551  156119 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key
	I0224 00:57:32.653573  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 00:57:32.653592  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 00:57:32.653605  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 00:57:32.653621  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 00:57:32.653689  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem (1338 bytes)
	W0224 00:57:32.653729  156119 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470_empty.pem, impossibly tiny 0 bytes
	I0224 00:57:32.653744  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 00:57:32.653780  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem (1078 bytes)
	I0224 00:57:32.653810  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem (1123 bytes)
	I0224 00:57:32.653841  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem (1675 bytes)
	I0224 00:57:32.653900  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:57:32.653933  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem -> /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.653953  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.653971  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.654351  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 00:57:32.671022  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 00:57:32.687023  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 00:57:32.704568  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 00:57:32.720792  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem --> /usr/share/ca-certificates/10470.pem (1338 bytes)
	I0224 00:57:32.736669  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /usr/share/ca-certificates/104702.pem (1708 bytes)
	I0224 00:57:32.751862  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 00:57:32.767766  156119 ssh_runner.go:195] Run: openssl version
	I0224 00:57:32.771845  156119 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 00:57:32.772107  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10470.pem && ln -fs /usr/share/ca-certificates/10470.pem /etc/ssl/certs/10470.pem"
	I0224 00:57:32.778681  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781540  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781594  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781627  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.786013  156119 command_runner.go:130] > 51391683
	I0224 00:57:32.786224  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10470.pem /etc/ssl/certs/51391683.0"
	I0224 00:57:32.792555  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104702.pem && ln -fs /usr/share/ca-certificates/104702.pem /etc/ssl/certs/104702.pem"
	I0224 00:57:32.799554  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802124  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802247  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802329  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.806261  156119 command_runner.go:130] > 3ec20f2e
	I0224 00:57:32.806397  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104702.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 00:57:32.812732  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 00:57:32.819108  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821882  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821913  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821939  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.826159  156119 command_runner.go:130] > b5213941
	I0224 00:57:32.826196  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
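The hash-named symlinks created above follow OpenSSL's c_rehash convention: `openssl x509 -hash` prints an 8-hex-digit subject-name hash, and the CA must be reachable as `<hash>.0` under the cert directory for verification to find it. A minimal sketch with a throwaway self-signed certificate (the `/tmp` paths and subject are illustrative, not from this run):

```shell
# Generate a disposable self-signed cert, then compute the subject hash
# that would name its /etc/ssl/certs/<hash>.0 symlink.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.crt)
# OpenSSL would look this CA up as /etc/ssl/certs/${hash}.0
echo "${hash}.0"
```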
	I0224 00:57:32.832571  156119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 00:57:32.855037  156119 command_runner.go:130] > cgroupfs
	I0224 00:57:32.855091  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:57:32.855103  156119 cni.go:136] 2 nodes found, recommending kindnet
	I0224 00:57:32.855117  156119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 00:57:32.855141  156119 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-461512 NodeName:multinode-461512-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 00:57:32.855273  156119 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-461512-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
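(Note: the `%!"(MISSING)` suffixes on the three evictionHard values above are Go format-string rendering artifacts in the log; the intended values are `"0%"`, which disables disk-pressure eviction as the preceding comment states. A quick way to confirm a literal `%` survives into a generated config, using a hypothetical scratch file:)

```shell
# Render an evictionHard fragment with printf and verify the literal "%"
# is preserved (printf needs %% to emit a single %).
printf 'evictionHard:\n  nodefs.available: "0%%"\n' > /tmp/eviction.demo.yaml
grep 'nodefs.available' /tmp/eviction.demo.yaml
```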
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 00:57:32.855344  156119 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-461512-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 00:57:32.855397  156119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 00:57:32.861684  156119 command_runner.go:130] > kubeadm
	I0224 00:57:32.861697  156119 command_runner.go:130] > kubectl
	I0224 00:57:32.861701  156119 command_runner.go:130] > kubelet
	I0224 00:57:32.862318  156119 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 00:57:32.862375  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0224 00:57:32.868604  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0224 00:57:32.880424  156119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 00:57:32.891986  156119 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 00:57:32.894555  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
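The `/etc/hosts` edits above (here and for `host.minikube.internal` earlier) use a remove-then-append pattern so repeated runs stay idempotent: strip any existing line ending in the tab-separated name, then append the fresh mapping. A standalone sketch of the same pattern against a scratch file (the scratch path and stale IP are illustrative):

```shell
# Seed a scratch hosts file containing a stale mapping.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any line ending in <tab>control-plane.minikube.internal,
# then append the current mapping -- same shape as the command above.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# The file now holds exactly one mapping for the name.
grep 'control-plane.minikube.internal' "$hosts"
```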
	I0224 00:57:32.902817  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:32.903035  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:32.903027  156119 start.go:301] JoinCluster: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:57:32.903094  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0224 00:57:32.903126  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:32.965906  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:33.108115  156119 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:57:33.108169  156119 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:33.108198  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-461512-m02"
	I0224 00:57:33.142214  156119 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 00:57:33.165745  156119 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:57:33.165770  156119 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:57:33.165783  156119 command_runner.go:130] > OS: Linux
	I0224 00:57:33.165791  156119 command_runner.go:130] > CGROUPS_CPU: enabled
	I0224 00:57:33.165804  156119 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0224 00:57:33.165811  156119 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0224 00:57:33.165816  156119 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0224 00:57:33.165826  156119 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0224 00:57:33.165834  156119 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0224 00:57:33.165840  156119 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0224 00:57:33.165845  156119 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0224 00:57:33.165850  156119 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0224 00:57:33.241499  156119 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0224 00:57:33.241531  156119 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0224 00:57:33.266172  156119 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:57:33.266256  156119 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:57:33.266271  156119 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 00:57:33.349062  156119 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0224 00:57:34.868204  156119 command_runner.go:130] > This node has joined the cluster:
	I0224 00:57:34.868233  156119 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0224 00:57:34.868243  156119 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0224 00:57:34.868254  156119 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0224 00:57:34.870639  156119 command_runner.go:130] ! W0224 00:57:33.141898    1336 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:57:34.870668  156119 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:57:34.870680  156119 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:57:34.870704  156119 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-461512-m02": (1.762488468s)
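The `--discovery-token-ca-cert-hash` value in the join command above is, per kubeadm's scheme, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with `sha256:`. A sketch reproducing the format with a throwaway CA (the `/tmp` paths and subject are illustrative; this does not recompute the hash from this run's CA):

```shell
# Create a disposable CA cert, then hash its SubjectPublicKeyInfo the way
# kubeadm formats --discovery-token-ca-cert-hash.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/join-ca.key -out /tmp/join-ca.crt -days 1 2>/dev/null
openssl x509 -pubkey -noout -in /tmp/join-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | sha256sum | awk '{print "sha256:" $1}'
```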
	I0224 00:57:34.870727  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0224 00:57:35.060590  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0224 00:57:35.060621  156119 start.go:303] JoinCluster complete in 2.157594183s
	I0224 00:57:35.060633  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:57:35.060637  156119 cni.go:136] 2 nodes found, recommending kindnet
	I0224 00:57:35.060676  156119 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 00:57:35.063963  156119 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 00:57:35.063985  156119 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 00:57:35.063992  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1317791     Links: 1
	I0224 00:57:35.063998  156119 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:57:35.064003  156119 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:57:35.064009  156119 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:57:35.064016  156119 command_runner.go:130] > Change: 2023-02-24 00:41:20.329534418 +0000
	I0224 00:57:35.064020  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:35.064058  156119 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 00:57:35.064070  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 00:57:35.075913  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 00:57:35.221111  156119 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0224 00:57:35.223962  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0224 00:57:35.226322  156119 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0224 00:57:35.236323  156119 command_runner.go:130] > daemonset.apps/kindnet configured
	I0224 00:57:35.239986  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:35.240233  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:35.240549  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:35.240561  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.240569  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.240579  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.242080  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.242101  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.242111  156119 round_trippers.go:580]     Audit-Id: 22e449c9-a66b-4718-9176-731a0bfb42db
	I0224 00:57:35.242127  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.242140  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.242162  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.242175  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.242182  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:35.242193  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.242224  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"437","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:35.242315  156119 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-461512" context rescaled to 1 replicas
	I0224 00:57:35.242352  156119 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:35.244595  156119 out.go:177] * Verifying Kubernetes components...
	I0224 00:57:35.245968  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:35.255278  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:35.255519  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:35.255773  156119 node_ready.go:35] waiting up to 6m0s for node "multinode-461512-m02" to be "Ready" ...
	I0224 00:57:35.255832  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:35.255843  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.255854  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.255867  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.257401  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.257421  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.257438  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.257447  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.257464  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.257475  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.257483  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.257494  156119 round_trippers.go:580]     Audit-Id: 578e6a5a-cddd-4783-a035-101bc94b08b4
	I0224 00:57:35.257596  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:35.257911  156119 node_ready.go:49] node "multinode-461512-m02" has status "Ready":"True"
	I0224 00:57:35.257924  156119 node_ready.go:38] duration metric: took 2.135663ms waiting for node "multinode-461512-m02" to be "Ready" ...
	I0224 00:57:35.257933  156119 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:35.257995  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:35.258004  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.258011  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.258024  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.260522  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:35.260538  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.260545  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.260553  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.260562  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.260574  156119 round_trippers.go:580]     Audit-Id: b38f0cfd-9d7c-4c7c-a390-3a049475d308
	I0224 00:57:35.260584  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.260596  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.261095  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0224 00:57:35.263496  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.263550  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:35.263557  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.263565  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.263571  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.265078  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.265098  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.265109  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.265122  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.265134  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.265143  156119 round_trippers.go:580]     Audit-Id: 225db0a9-2390-4fe5-bb77-ddcd53227ee8
	I0224 00:57:35.265158  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.265171  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.265289  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 00:57:35.265704  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.265717  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.265724  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.265730  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.267226  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.267245  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.267254  156119 round_trippers.go:580]     Audit-Id: 74dae4a2-590a-42e8-9456-1eac83edc16f
	I0224 00:57:35.267263  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.267272  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.267289  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.267302  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.267316  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.267411  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.267697  156119 pod_ready.go:92] pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.267710  156119 pod_ready.go:81] duration metric: took 4.19524ms waiting for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.267721  156119 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.267766  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-461512
	I0224 00:57:35.267775  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.267785  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.267796  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.269318  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.269334  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.269344  156119 round_trippers.go:580]     Audit-Id: b4bfb09d-e1e6-4207-95c4-e410b8e5d3e0
	I0224 00:57:35.269353  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.269367  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.269375  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.269384  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.269394  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.269461  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-461512","namespace":"kube-system","uid":"85634add-ee6f-426e-8dce-c5bd503ada85","resourceVersion":"279","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.mirror":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.seen":"2023-02-24T00:56:50.894583011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 00:57:35.269769  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.269781  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.269788  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.269794  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.271183  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.271204  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.271214  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.271225  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.271236  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.271247  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.271260  156119 round_trippers.go:580]     Audit-Id: 9b061dc6-e39d-4590-ac10-1e0e51c3fa00
	I0224 00:57:35.271272  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.271366  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.271648  156119 pod_ready.go:92] pod "etcd-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.271658  156119 pod_ready.go:81] duration metric: took 3.930301ms waiting for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.271669  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.271708  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-461512
	I0224 00:57:35.271715  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.271721  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.271728  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.273022  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.273043  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.273053  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.273060  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.273065  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.273071  156119 round_trippers.go:580]     Audit-Id: 5abc9a7f-5a67-4924-b8e9-104d6635d5c5
	I0224 00:57:35.273079  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.273088  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.273180  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-461512","namespace":"kube-system","uid":"915d077c-7a17-4c95-9199-8146800a171b","resourceVersion":"382","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.mirror":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.seen":"2023-02-24T00:56:50.894613111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 00:57:35.273552  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.273562  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.273569  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.273575  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.274832  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.274845  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.274853  156119 round_trippers.go:580]     Audit-Id: 294499bf-a8d4-4fdc-b834-b71e69e7fb8a
	I0224 00:57:35.274862  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.274870  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.274883  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.274896  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.274908  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.274986  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.275253  156119 pod_ready.go:92] pod "kube-apiserver-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.275264  156119 pod_ready.go:81] duration metric: took 3.589866ms waiting for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.275271  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.275306  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-461512
	I0224 00:57:35.275314  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.275320  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.275326  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.276595  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.276613  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.276621  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.276627  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.276633  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.276641  156119 round_trippers.go:580]     Audit-Id: 74edde08-68ae-43a3-b3cb-9a62ed698a3c
	I0224 00:57:35.276649  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.276657  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.276787  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-461512","namespace":"kube-system","uid":"8e426bcd-dab9-430d-b166-f7ab34013208","resourceVersion":"274","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.mirror":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.seen":"2023-02-24T00:56:50.894614692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 00:57:35.277138  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.277150  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.277157  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.277163  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.278432  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.278448  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.278455  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.278464  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.278474  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.278486  156119 round_trippers.go:580]     Audit-Id: 585cdc6c-4ec4-4fdf-8600-4449ed6e569c
	I0224 00:57:35.278509  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.278519  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.278633  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.278890  156119 pod_ready.go:92] pod "kube-controller-manager-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.278900  156119 pod_ready.go:81] duration metric: took 3.62409ms waiting for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.278907  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.456319  156119 request.go:622] Waited for 177.362601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:35.456376  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:35.456380  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.456388  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.456397  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.458241  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.458263  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.458274  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.458288  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.458297  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.458305  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.458319  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.458331  156119 round_trippers.go:580]     Audit-Id: 3c317fc6-efbe-434f-bc24-7aff9effd134
	I0224 00:57:35.458464  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvmbp","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9e9bac2-7132-4b60-a535-80b6113e0e8d","resourceVersion":"392","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 00:57:35.656236  156119 request.go:622] Waited for 197.348674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.656300  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.656308  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.656320  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.656334  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.658206  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.658225  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.658235  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.658245  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.658254  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.658264  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.658274  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.658286  156119 round_trippers.go:580]     Audit-Id: 9f228e37-8b47-4cd9-b341-29b1cce2bf2f
	I0224 00:57:35.658367  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.658703  156119 pod_ready.go:92] pod "kube-proxy-dvmbp" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.658715  156119 pod_ready.go:81] duration metric: took 379.802212ms waiting for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.658724  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phwrs" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.856047  156119 request.go:622] Waited for 197.270982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:35.856114  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:35.856123  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.856131  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.856138  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.857867  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.857885  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.857891  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.857897  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.857903  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.857908  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.857914  156119 round_trippers.go:580]     Audit-Id: de9b2af4-a722-42d6-b783-e741fb59335b
	I0224 00:57:35.857919  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.858011  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"469","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0224 00:57:36.056813  156119 request.go:622] Waited for 198.369913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.056882  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.056889  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.056897  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.056909  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.058838  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:36.058870  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.058881  156119 round_trippers.go:580]     Audit-Id: a375d4ba-c6ec-436c-a7ea-ad9ec0be8ac2
	I0224 00:57:36.058888  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.058894  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.058900  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.058906  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.058914  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.059017  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:36.560091  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:36.560165  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.560189  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.560206  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.562586  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:36.562654  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.562674  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.562692  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.562717  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.562745  156119 round_trippers.go:580]     Audit-Id: f6beb059-32be-4ed1-8b61-f36615f67007
	I0224 00:57:36.562763  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.562778  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.562922  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"483","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 00:57:36.563528  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.563558  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.563575  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.563607  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.565407  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:36.565464  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.565483  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.565501  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.565528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.565549  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.565574  156119 round_trippers.go:580]     Audit-Id: b826fba0-67ca-4966-afcd-feb3fe207fd1
	I0224 00:57:36.565594  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.565714  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:37.060449  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:37.060472  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.060488  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.060499  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.062860  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.062899  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.062910  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.062924  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.062937  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.062950  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.062961  156119 round_trippers.go:580]     Audit-Id: 42c3a841-711f-479a-8667-78c05be6250e
	I0224 00:57:37.062974  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.063119  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"491","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 00:57:37.063599  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:37.063613  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.063624  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.063633  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.065355  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:37.065376  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.065387  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.065395  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.065407  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.065416  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.065425  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.065436  156119 round_trippers.go:580]     Audit-Id: 85cf5451-24a8-45ed-a421-617c2740162c
	I0224 00:57:37.065566  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:37.065849  156119 pod_ready.go:92] pod "kube-proxy-phwrs" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:37.065870  156119 pod_ready.go:81] duration metric: took 1.40713953s waiting for pod "kube-proxy-phwrs" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.065885  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.065943  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:37.065951  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.065960  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.065973  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.067725  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:37.067753  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.067763  156119 round_trippers.go:580]     Audit-Id: 567cc883-333a-49c3-b68e-f253a42841d7
	I0224 00:57:37.067771  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.067782  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.067791  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.067800  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.067814  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.067906  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-461512","namespace":"kube-system","uid":"64f3ef30-ed87-42cc-b0e2-cd3c7c922383","resourceVersion":"280","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.mirror":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.seen":"2023-02-24T00:56:50.894615981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 00:57:37.256644  156119 request.go:622] Waited for 188.369517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:37.256726  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:37.256741  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.256757  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.256771  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.259181  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.259205  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.259216  156119 round_trippers.go:580]     Audit-Id: b1afc5d8-eb41-4761-9e50-95912e19243c
	I0224 00:57:37.259224  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.259237  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.259247  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.259261  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.259274  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.259390  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:37.259695  156119 pod_ready.go:92] pod "kube-scheduler-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:37.259708  156119 pod_ready.go:81] duration metric: took 193.811569ms waiting for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.259721  156119 pod_ready.go:38] duration metric: took 2.001773946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:37.259745  156119 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 00:57:37.259793  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:37.271419  156119 system_svc.go:56] duration metric: took 11.666861ms WaitForService to wait for kubelet.
	I0224 00:57:37.271443  156119 kubeadm.go:578] duration metric: took 2.029060625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 00:57:37.271459  156119 node_conditions.go:102] verifying NodePressure condition ...
	I0224 00:57:37.456890  156119 request.go:622] Waited for 185.359286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:37.456974  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:37.456986  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.457003  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.457018  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.459450  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.459477  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.459488  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.459499  156119 round_trippers.go:580]     Audit-Id: 926ea2fc-9682-4288-abe8-366a6e931c81
	I0224 00:57:37.459511  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.459525  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.459535  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.459553  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.459762  156119 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0224 00:57:37.460364  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:37.460382  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:37.460391  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:37.460395  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:37.460401  156119 node_conditions.go:105] duration metric: took 188.938984ms to run NodePressure ...
	I0224 00:57:37.460412  156119 start.go:228] waiting for startup goroutines ...
	I0224 00:57:37.460446  156119 start.go:242] writing updated cluster config ...
	I0224 00:57:37.460894  156119 ssh_runner.go:195] Run: rm -f paused
	I0224 00:57:37.523560  156119 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 00:57:37.526010  156119 out.go:177] * Done! kubectl is now configured to use "multinode-461512" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 00:56:34 UTC, end at Fri 2023-02-24 00:57:42 UTC. --
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642123853Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642149728Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642159802Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642197277Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642232110Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642267960Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642301399Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642346191Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642374405Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642597931Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642636495Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.643080766Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.653654103Z" level=info msg="Loading containers: start."
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.726308188Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.757602777Z" level=info msg="Loading containers: done."
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.766013325Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.766084796Z" level=info msg="Daemon has completed initialization"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.778254487Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 00:56:37 multinode-461512 systemd[1]: Started Docker Application Container Engine.
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.784890254Z" level=info msg="API listen on [::]:2376"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.788986365Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.849389166Z" level=info msg="ignoring event" container=18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.909337692Z" level=info msg="ignoring event" container=d20fb8d35594351811e98e88cc7bbbc92fe03e5e7dade38f76fada0dc3532673 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.975380266Z" level=info msg="ignoring event" container=e42bbd739d7352d417880430dda0aa46923d501cf050bf2f8cba81cd285a8c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:19 multinode-461512 dockerd[942]: time="2023-02-24T00:57:19.060141729Z" level=info msg="ignoring event" container=6a7397548127f04391e2ea61c9147e8b7c0c83c2e388ce94493c4924a2c0a5af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	7b87c544078ac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 seconds ago       Running             busybox                   0                   b922299f9d89e
	20fba57c87c13       5185b96f0becf                                                                                         23 seconds ago      Running             coredns                   1                   781b48ae8dfaa
	6eb35688e880e       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              36 seconds ago      Running             kindnet-cni               0                   dd0a98d9fc833
	7b6c5ea21c78d       6e38f40d628db                                                                                         36 seconds ago      Running             storage-provisioner       0                   4f736381c5b2d
	e42bbd739d735       5185b96f0becf                                                                                         37 seconds ago      Exited              coredns                   0                   6a7397548127f
	14dbbf3c014be       46a6bb3c77ce0                                                                                         38 seconds ago      Running             kube-proxy                0                   3ef84eaf12535
	b6625d6f60721       e9c08e11b07f6                                                                                         57 seconds ago      Running             kube-controller-manager   0                   0358ca89ade14
	21a2538a45b03       fce326961ae2d                                                                                         57 seconds ago      Running             etcd                      0                   679fa69b2a76c
	66406a6af762d       655493523f607                                                                                         57 seconds ago      Running             kube-scheduler            0                   63f9f3e248e49
	7bfce1d4138f9       deb04688c4a35                                                                                         57 seconds ago      Running             kube-apiserver            0                   53b2492db94e2
	
	* 
	* ==> coredns [20fba57c87c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35048 - 6704 "HINFO IN 4938656220510300332.7239367872624460590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007526579s
	[INFO] 10.244.0.3:43092 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198848s
	[INFO] 10.244.0.3:42969 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.011827633s
	[INFO] 10.244.0.3:37111 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010840995s
	[INFO] 10.244.0.3:49101 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.008721735s
	[INFO] 10.244.0.3:34707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139228s
	[INFO] 10.244.0.3:51138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006605112s
	[INFO] 10.244.0.3:33076 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160291s
	[INFO] 10.244.0.3:53026 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174372s
	[INFO] 10.244.0.3:55111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008365317s
	[INFO] 10.244.0.3:35024 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101642s
	[INFO] 10.244.0.3:39296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125824s
	[INFO] 10.244.0.3:48645 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110462s
	[INFO] 10.244.0.3:57841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130253s
	[INFO] 10.244.0.3:34265 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094239s
	[INFO] 10.244.0.3:40254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010202s
	[INFO] 10.244.0.3:56855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101307s
	
	* 
	* ==> coredns [e42bbd739d73] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 334083178742072081.1122851254239722435. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 334083178742072081.1122851254239722435. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-461512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-461512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=multinode-461512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T00_56_51_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 00:56:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-461512
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 00:57:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-461512
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                9ba42aef-3183-4f44-952b-05c49f22ad59
	  Boot ID:                    fd195a10-b2a0-490a-9b98-4841e110d2e2
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-tj597                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 coredns-787d4945fb-r6m7z                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     39s
	  kube-system                 etcd-multinode-461512                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         51s
	  kube-system                 kindnet-5p4bl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-multinode-461512             250m (3%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-multinode-461512    200m (2%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-dvmbp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-multinode-461512             100m (1%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  60s (x4 over 61s)  kubelet          Node multinode-461512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x4 over 61s)  kubelet          Node multinode-461512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x4 over 61s)  kubelet          Node multinode-461512 status is now: NodeHasSufficientPID
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s                kubelet          Node multinode-461512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s                kubelet          Node multinode-461512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s                kubelet          Node multinode-461512 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             51s                kubelet          Node multinode-461512 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                51s                kubelet          Node multinode-461512 status is now: NodeReady
	  Normal  RegisteredNode           40s                node-controller  Node multinode-461512 event: Registered Node multinode-461512 in Controller
	
	
	Name:               multinode-461512-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-461512-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 00:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-461512-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-461512-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                8d7d4998-5372-4a35-b974-d4f494ff6737
	  Boot ID:                    fd195a10-b2a0-490a-9b98-4841e110d2e2
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-5jg4x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kindnet-6xvgj               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8s
	  kube-system                 kube-proxy-phwrs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 6s               kube-proxy       
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x2 over 9s)  kubelet          Node multinode-461512-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x2 over 9s)  kubelet          Node multinode-461512-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x2 over 9s)  kubelet          Node multinode-461512-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8s               kubelet          Node multinode-461512-m02 status is now: NodeReady
	  Normal  RegisteredNode           5s               node-controller  Node multinode-461512-m02 event: Registered Node multinode-461512-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.008747] FS-Cache: O-key=[8] '86a00f0200000000'
	[  +0.006294] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=00000000b5c2b24e
	[  +0.007360] FS-Cache: N-key=[8] '86a00f0200000000'
	[  +3.026314] FS-Cache: Duplicate cookie detected
	[  +0.004684] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006737] FS-Cache: O-cookie d=00000000a2895d09{9p.inode} n=000000001405c4ca
	[  +0.007347] FS-Cache: O-key=[8] '85a00f0200000000'
	[  +0.004931] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006672] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=00000000dd6525a0
	[  +0.008733] FS-Cache: N-key=[8] '85a00f0200000000'
	[  +0.476759] FS-Cache: Duplicate cookie detected
	[  +0.004695] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000a2895d09{9p.inode} n=0000000071597208
	[  +0.007366] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.004941] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006605] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=0000000018960a10
	[  +0.007376] FS-Cache: N-key=[8] '8da00f0200000000'
	[  +7.278389] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 93 1b 91 86 10 08 06
	[Feb24 00:49] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb24 00:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 66 2b c2 36 52 08 06
	[Feb24 00:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 5f 99 5d 1e ae 08 06
	
	* 
	* ==> etcd [21a2538a45b0] <==
	* {"level":"info","ts":"2023-02-24T00:56:45.651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-24T00:56:45.652Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-461512 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T00:56:46.281Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T00:56:46.281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:57:42 up 40 min,  0 users,  load average: 2.40, 1.91, 1.31
	Linux multinode-461512 5.15.0-1029-gcp #36~20.04.1-Ubuntu SMP Tue Jan 24 16:54:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [6eb35688e880] <==
	* I0224 00:57:06.949029       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 00:57:06.949064       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 00:57:06.949165       1 main.go:116] setting mtu 1500 for CNI 
	I0224 00:57:06.949176       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 00:57:06.949196       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 00:57:07.251040       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:07.251067       1 main.go:227] handling current node
	I0224 00:57:17.361449       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:17.361484       1 main.go:227] handling current node
	I0224 00:57:27.373478       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:27.373509       1 main.go:227] handling current node
	I0224 00:57:37.385885       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:37.385915       1 main.go:227] handling current node
	I0224 00:57:37.385927       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 00:57:37.385935       1 main.go:250] Node multinode-461512-m02 has CIDR [10.244.1.0/24] 
	I0224 00:57:37.386142       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [7bfce1d4138f] <==
	* I0224 00:56:48.023240       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 00:56:48.023264       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 00:56:48.023287       1 cache.go:39] Caches are synced for autoregister controller
	I0224 00:56:48.023294       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 00:56:48.023295       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 00:56:48.023329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 00:56:48.023466       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 00:56:48.025934       1 controller.go:615] quota admission added evaluator for: namespaces
	I0224 00:56:48.098212       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 00:56:48.717677       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 00:56:48.927009       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 00:56:48.930593       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 00:56:48.930610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 00:56:49.319477       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 00:56:49.348154       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 00:56:49.463666       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 00:56:49.470272       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 00:56:49.471100       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 00:56:49.474535       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 00:56:49.962979       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 00:56:50.828467       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 00:56:50.837589       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 00:56:50.845708       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 00:57:03.268863       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 00:57:03.618091       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b6625d6f6072] <==
	* I0224 00:57:02.895423       1 shared_informer.go:280] Caches are synced for service account
	I0224 00:57:02.923859       1 shared_informer.go:280] Caches are synced for namespace
	I0224 00:57:02.969000       1 shared_informer.go:280] Caches are synced for disruption
	I0224 00:57:02.977292       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 00:57:03.030504       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 00:57:03.272456       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0224 00:57:03.343751       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 00:57:03.415059       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 00:57:03.415080       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0224 00:57:03.627402       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dvmbp"
	I0224 00:57:03.627438       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5p4bl"
	I0224 00:57:03.821160       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-9ws7r"
	I0224 00:57:03.827143       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-r6m7z"
	I0224 00:57:04.053158       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 00:57:04.058770       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-9ws7r"
	W0224 00:57:34.146710       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-461512-m02" does not exist
	I0224 00:57:34.153002       1 range_allocator.go:372] Set node multinode-461512-m02 PodCIDR to [10.244.1.0/24]
	I0224 00:57:34.156431       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-phwrs"
	I0224 00:57:34.158467       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6xvgj"
	W0224 00:57:34.861795       1 topologycache.go:232] Can't get CPU or zone information for multinode-461512-m02 node
	W0224 00:57:37.820186       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-461512-m02. Assuming now as a timestamp.
	I0224 00:57:37.820309       1 event.go:294] "Event occurred" object="multinode-461512-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-461512-m02 event: Registered Node multinode-461512-m02 in Controller"
	I0224 00:57:38.577084       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 00:57:38.584932       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-5jg4x"
	I0224 00:57:38.590210       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tj597"
	
	* 
	* ==> kube-proxy [14dbbf3c014b] <==
	* I0224 00:57:04.575725       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 00:57:04.575800       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 00:57:04.575825       1 server_others.go:535] "Using iptables proxy"
	I0224 00:57:04.671989       1 server_others.go:176] "Using iptables Proxier"
	I0224 00:57:04.672043       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 00:57:04.672059       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 00:57:04.672084       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 00:57:04.672110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 00:57:04.672443       1 server.go:655] "Version info" version="v1.26.1"
	I0224 00:57:04.672456       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 00:57:04.673273       1 config.go:444] "Starting node config controller"
	I0224 00:57:04.673283       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 00:57:04.673608       1 config.go:317] "Starting service config controller"
	I0224 00:57:04.673614       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 00:57:04.673635       1 config.go:226] "Starting endpoint slice config controller"
	I0224 00:57:04.673639       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 00:57:04.773689       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 00:57:04.773730       1 shared_informer.go:280] Caches are synced for node config
	I0224 00:57:04.773692       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [66406a6af762] <==
	* W0224 00:56:47.972975       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:47.972992       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 00:56:47.972998       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:47.973003       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:47.972988       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:47.973010       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 00:56:47.973019       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:47.973021       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:48.928005       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 00:56:48.928032       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0224 00:56:48.987666       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:48.987702       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.070184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:49.070214       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.082957       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:49.082998       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0224 00:56:49.102941       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 00:56:49.102971       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:49.148793       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:49.148818       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.183926       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 00:56:49.183955       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 00:56:49.256012       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 00:56:49.256038       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0224 00:56:52.069555       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 00:56:34 UTC, end at Fri 2023-02-24 00:57:43 UTC. --
	Feb 24 00:57:06 multinode-461512 kubelet[2303]: I0224 00:57:06.879762    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-9ws7r" podStartSLOduration=3.879719521 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:06.879187419 +0000 UTC m=+16.070938576" watchObservedRunningTime="2023-02-24 00:57:06.879719521 +0000 UTC m=+16.071470667"
	Feb 24 00:57:07 multinode-461512 kubelet[2303]: I0224 00:57:07.240889    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r6m7z" podStartSLOduration=4.24084762 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:07.240515608 +0000 UTC m=+16.432266753" watchObservedRunningTime="2023-02-24 00:57:07.24084762 +0000 UTC m=+16.432598763"
	Feb 24 00:57:07 multinode-461512 kubelet[2303]: I0224 00:57:07.639420    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.6393830339999997 pod.CreationTimestamp="2023-02-24 00:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:07.639016106 +0000 UTC m=+16.830767454" watchObservedRunningTime="2023-02-24 00:57:07.639383034 +0000 UTC m=+16.831134180"
	Feb 24 00:57:08 multinode-461512 kubelet[2303]: I0224 00:57:08.041748    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5p4bl" podStartSLOduration=-9.223372031813065e+09 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="2023-02-24 00:57:04.477212916 +0000 UTC m=+13.668964045" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:08.041458959 +0000 UTC m=+17.233210103" watchObservedRunningTime="2023-02-24 00:57:08.041710347 +0000 UTC m=+17.233461491"
	Feb 24 00:57:11 multinode-461512 kubelet[2303]: I0224 00:57:11.649444    2303 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 00:57:11 multinode-461512 kubelet[2303]: I0224 00:57:11.650239    2303 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 00:57:18 multinode-461512 kubelet[2303]: I0224 00:57:18.997269    2303 scope.go:115] "RemoveContainer" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.010931    2303 scope.go:115] "RemoveContainer" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: E0224 00:57:19.011611    2303 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.011659    2303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9} err="failed to get container status \"18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9\": rpc error: code = Unknown desc = Error: No such container: 18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.073868    2303 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume\") pod \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\" (UID: \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\") "
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.073918    2303 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x48v5\" (UniqueName: \"kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5\") pod \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\" (UID: \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\") "
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: W0224 00:57:19.074158    2303 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.074360    2303 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" (UID: "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.075664    2303 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5" (OuterVolumeSpecName: "kube-api-access-x48v5") pod "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" (UID: "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d"). InnerVolumeSpecName "kube-api-access-x48v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.175090    2303 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-x48v5\" (UniqueName: \"kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5\") on node \"multinode-461512\" DevicePath \"\""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.175121    2303 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume\") on node \"multinode-461512\" DevicePath \"\""
	Feb 24 00:57:20 multinode-461512 kubelet[2303]: I0224 00:57:20.014110    2303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7397548127f04391e2ea61c9147e8b7c0c83c2e388ce94493c4924a2c0a5af"
	Feb 24 00:57:20 multinode-461512 kubelet[2303]: I0224 00:57:20.983453    2303 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d path="/var/lib/kubelet/pods/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d/volumes"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.594449    2303 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: E0224 00:57:38.594529    2303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" containerName="coredns"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.594566    2303 memory_manager.go:346] "RemoveStaleState removing state" podUID="4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" containerName="coredns"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.785698    2303 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfh7n\" (UniqueName: \"kubernetes.io/projected/0da6e203-810a-4320-8612-085e93ef297c-kube-api-access-wfh7n\") pod \"busybox-6b86dd6d48-tj597\" (UID: \"0da6e203-810a-4320-8612-085e93ef297c\") " pod="default/busybox-6b86dd6d48-tj597"
	Feb 24 00:57:39 multinode-461512 kubelet[2303]: I0224 00:57:39.131438    2303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b922299f9d89e2207ff7631d4da8d198dc1d0e29717d959b857522e26d7636ce"
	Feb 24 00:57:40 multinode-461512 kubelet[2303]: I0224 00:57:40.153699    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-tj597" podStartSLOduration=-9.22337203470111e+09 pod.CreationTimestamp="2023-02-24 00:57:38 +0000 UTC" firstStartedPulling="2023-02-24 00:57:39.150002642 +0000 UTC m=+48.341753799" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:40.153274036 +0000 UTC m=+49.345025183" watchObservedRunningTime="2023-02-24 00:57:40.153665196 +0000 UTC m=+49.345416340"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-461512 -n multinode-461512
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-461512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (5.54s)

TestMultiNode/serial/PingHostFrom2Pods (3.23s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-5jg4x -- sh -c "ping -c 1 <nil>": exit status 2 (175.469741ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-5jg4x): exit status 2
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-tj597 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-461512 -- exec busybox-6b86dd6d48-tj597 -- sh -c "ping -c 1 192.168.58.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-461512
helpers_test.go:235: (dbg) docker inspect multinode-461512:

-- stdout --
	[
	    {
	        "Id": "8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd",
	        "Created": "2023-02-24T00:56:33.396639879Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157116,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T00:56:33.759260602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/hosts",
	        "LogPath": "/var/lib/docker/containers/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd-json.log",
	        "Name": "/multinode-461512",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-461512:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-461512",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f-init/diff:/var/lib/docker/overlay2/1fe70832e0138bde815d3a324f05e073d3a1973b42aab12c10645a466ed7b978/diff:/var/lib/docker/overlay2/82b0ba24239050b2590c61fc0ca025cbcbc12de3239fa738d35253f8a5c7e972/diff:/var/lib/docker/overlay2/97149d64e56c6be885da441a048f01a2e6f93535d07240c3a6b69c63f1503930/diff:/var/lib/docker/overlay2/1ac1e7cc44d30a56fbdcbf72e6b5dab7e724aa5966044fe460e65cf440be551d/diff:/var/lib/docker/overlay2/1d2ef923b05561e68505c25afff7e0f7a174db5781e4bc0b09d004587941568a/diff:/var/lib/docker/overlay2/f6d602b2c8869a40598f32afb833eaff656758f7bd22e56b071c49e0c797ea46/diff:/var/lib/docker/overlay2/e27675cfda80daa6b54ccbc8d9b24d33061cb4a28b57e557c3d0607b1ca8c5fd/diff:/var/lib/docker/overlay2/743ac428a80ae93a9d3d1f3434309bec5bdf6ebb19ecb8f7f908698be8564088/diff:/var/lib/docker/overlay2/20ac9915298b6bc6d584f78851d401364c718a502a2859ddd9fd8401a19a7480/diff:/var/lib/docker/overlay2/36d165c0301a63cdcbb14cf9e744eb4a46c6ff10b22e23ba1a98af8b792f377a/diff:/var/lib/docker/overlay2/38a6fe7c24710dcfc6bfd9640daf24d6f0033b8344c402c8c4a612982897a3ce/diff:/var/lib/docker/overlay2/3fdb857d38e4c0bc84111616dfc7ab74ba6995e518e517d3e2a0c14dfadc4ef8/diff:/var/lib/docker/overlay2/b1f93ca1a74f0de690373822899bcac40eacbedc6fde9a1a0b6fb748ee87db9f/diff:/var/lib/docker/overlay2/119805d1ad6f1abe3a4051c29db755a23aa5e0cc6c5216db76476a2a0b956630/diff:/var/lib/docker/overlay2/1fad59af19b8a00d817ce511b7e6b3be39ee5da67959bf3ea6050a902141b1b9/diff:/var/lib/docker/overlay2/a8d6b25a155af696d2dde78d17214a6c8b9f867b78c211c9ed1daa887f364de8/diff:/var/lib/docker/overlay2/07f6f4f06c8e18bfa8b104132cff43b5dc0f64ffb4b4c341a745abf1c058d1aa/diff:/var/lib/docker/overlay2/6146dc9e49b7cfd840dcf83603ba5654eedbdabdeba6a47ed37b9540df95b3dc/diff:/var/lib/docker/overlay2/9301871dd3992fd37d4fa495e588c9f044e10e341734e02997f3a08855c3a647/diff:/var/lib/docker/overlay2/f08d255565f3007a7033097b84d48dc5964bb491ae9da7d54ef75d803422941d/diff:/var/lib/docker/overlay2/ffb7dfc431d833298f37b17ba73910970ca4887e4562867226090c024809b030/diff:/var/lib/docker/overlay2/c1fa340a85c3ccb353f2ec68e4d4208507a1fc339b0e63c299489a5ddbe5db6e/diff:/var/lib/docker/overlay2/aed9b5b3204bf14e554aeecd998e1d08f11b2c4b4643aa3942993e5bbfdcdea5/diff:/var/lib/docker/overlay2/f92f0a0a890930b99a18863e62c3af3b1ca4118f511f31f25fb30f5816f1e306/diff:/var/lib/docker/overlay2/a6001e111530a9b76c2f1f6eaa5983d7471ad99301e26a1a29e1e7e14c46fc25/diff:/var/lib/docker/overlay2/158bea0dc6daf4c80fa121667ef2be88c0e7c4dc6dc4eabfd2125a30403a7310/diff:/var/lib/docker/overlay2/0b082ffe105ffd42019f3bb0591e92c600fec4fdba58983da7ef71201342da2f/diff:/var/lib/docker/overlay2/85c8564cb266fc69d105571e429342a3a1e618f1ef232777f2f9dc0cfb7843dc/diff:/var/lib/docker/overlay2/ffee666ae571252d864d8129270279455332344b4cf1f50b5533483c483e0e29/diff:/var/lib/docker/overlay2/aa9f0f59d766b30da23b419f0ef65398bc8519903d407b98385baf7cdec79efd/diff:/var/lib/docker/overlay2/f322b716bd3a78423d2d0e16d77fbee15b4bd0803d0e65b024a925a14a7a790a/diff:/var/lib/docker/overlay2/38b6941a9d9af30dc4abbeea1ed9f50331f557067e3f8f73e60e92669853a6b2/diff:/var/lib/docker/overlay2/4b7af9ea8b3868fc54ff26975a23a0aa3b2fdfb167e1536d80daeee27e98038c/diff:/var/lib/docker/overlay2/ee5f2aa02324c5ad9abf88568938efa32cbeeeee74b5b8bf25849922f7f34c40/diff:/var/lib/docker/overlay2/5b6043dd38472ee71b161257d55d7299454a3361c73bf42f91e41fcf318222a8/diff:/var/lib/docker/overlay2/5206772ef11c6059618ba392a15959b7a08cf16d6ecdd1acb3b7ae9b863309cf/diff:/var/lib/docker/overlay2/8b7c2d24480675d9b691b006217d2af5ed3a334f1cdbceaf50bb672c29508a0a/diff:/var/lib/docker/overlay2/3dfde0dfcb9c56924e3ecbd3ea8ebe3cac8fc1f018d7af0c25db468c6b4c56a5/diff:/var/lib/docker/overlay2/eea5f975c03e242f48308673b4fc38cf4c71bc091d7efcfa599618c68445f42a/diff:/var/lib/docker/overlay2/6ac45c1fa26e00015e1cbf85278c90a6332b7c174a6387ce98d3ec9aed6a4b38/diff:/var/lib/docker/overlay2/a661f542744c32a937ab0f1940b933cf03ef63ab8a41a662c4965de9ec1af7de/diff:/var/lib/docker/overlay2/f1da32cae243bb1c1811c9899935e81a61930cb1a9dea9b2846986f62b09252d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e0c6db3d3a172ae33d534dbee1904935476b19f2016351ab7903192a530f397f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-461512",
	                "Source": "/var/lib/docker/volumes/multinode-461512/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-461512",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-461512",
	                "name.minikube.sigs.k8s.io": "multinode-461512",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "163a6c74464ae8cfbbeac5751d9ff1163430d531f6a8bdf0a7bf165fd2d7285f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/163a6c74464a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-461512": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8075ab3952c8",
	                        "multinode-461512"
	                    ],
	                    "NetworkID": "17a26df4c936d295b7bf8159a236e6bc3a572797bfdc484aaa781501d0671db6",
	                    "EndpointID": "a717154286d4b3174f793c93183e3f11a93236484cc26df4bd1dfb6a5a9e3f9f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-461512 -n multinode-461512
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 logs -n 25: (1.020312313s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-980786                           | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| start   | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| ssh     | mount-start-2-999466 ssh -- ls                    | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-999466                           | mount-start-2-999466 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| delete  | -p mount-start-1-980786                           | mount-start-1-980786 | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:56 UTC |
	| start   | -p multinode-461512                               | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:56 UTC | 24 Feb 23 00:57 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- apply -f                   | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- rollout                    | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- get pods -o                | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- get pods -o                | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- get pods -o                | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-5jg4x                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC |                     |
	|         | busybox-6b86dd6d48-5jg4x -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-461512 -- exec                       | multinode-461512     | jenkins | v1.29.0 | 24 Feb 23 00:57 UTC | 24 Feb 23 00:57 UTC |
	|         | busybox-6b86dd6d48-tj597 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:56:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:56:26.873911  156119 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:56:26.874003  156119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:56:26.874011  156119 out.go:309] Setting ErrFile to fd 2...
	I0224 00:56:26.874015  156119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:56:26.874153  156119 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 00:56:26.874692  156119 out.go:303] Setting JSON to false
	I0224 00:56:26.876076  156119 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2336,"bootTime":1677197851,"procs":979,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:56:26.876137  156119 start.go:135] virtualization: kvm guest
	I0224 00:56:26.878547  156119 out.go:177] * [multinode-461512] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:56:26.880035  156119 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:56:26.880048  156119 notify.go:220] Checking for updates...
	I0224 00:56:26.881550  156119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:56:26.883930  156119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:56:26.885601  156119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:56:26.887060  156119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:56:26.888472  156119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:56:26.889938  156119 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:56:26.958114  156119 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:56:26.958212  156119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:56:27.075270  156119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-24 00:56:27.067045184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:56:27.075365  156119 docker.go:294] overlay module found
	I0224 00:56:27.077524  156119 out.go:177] * Using the docker driver based on user configuration
	I0224 00:56:27.078929  156119 start.go:296] selected driver: docker
	I0224 00:56:27.078940  156119 start.go:857] validating driver "docker" against <nil>
	I0224 00:56:27.078951  156119 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:56:27.079708  156119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:56:27.193563  156119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-24 00:56:27.184958376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:56:27.193689  156119 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 00:56:27.193940  156119 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 00:56:27.196229  156119 out.go:177] * Using Docker driver with root privileges
	I0224 00:56:27.198040  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:27.198055  156119 cni.go:136] 0 nodes found, recommending kindnet
	I0224 00:56:27.198077  156119 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 00:56:27.198090  156119 start_flags.go:319] config:
	{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:56:27.200007  156119 out.go:177] * Starting control plane node multinode-461512 in cluster multinode-461512
	I0224 00:56:27.201495  156119 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:56:27.203079  156119 out.go:177] * Pulling base image ...
	I0224 00:56:27.204602  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:27.204631  156119 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 00:56:27.204640  156119 cache.go:57] Caching tarball of preloaded images
	I0224 00:56:27.204698  156119 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:56:27.204711  156119 preload.go:174] Found /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 00:56:27.204799  156119 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 00:56:27.205118  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:56:27.205141  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json: {Name:mkc5f17fe6300edcab127e334799db6103cd1896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:27.267676  156119 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 00:56:27.267703  156119 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 00:56:27.267720  156119 cache.go:193] Successfully downloaded all kic artifacts
	I0224 00:56:27.267760  156119 start.go:364] acquiring machines lock for multinode-461512: {Name:mk1450fd8b60e8292ab20dfb5f293bf4c24349b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:56:27.267849  156119 start.go:368] acquired machines lock for "multinode-461512" in 69.552µs
	I0224 00:56:27.267872  156119 start.go:93] Provisioning new machine with config: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 00:56:27.267939  156119 start.go:125] createHost starting for "" (driver="docker")
	I0224 00:56:27.270255  156119 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 00:56:27.270440  156119 start.go:159] libmachine.API.Create for "multinode-461512" (driver="docker")
	I0224 00:56:27.270467  156119 client.go:168] LocalClient.Create starting
	I0224 00:56:27.270517  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem
	I0224 00:56:27.270551  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:56:27.270567  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:56:27.270619  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem
	I0224 00:56:27.270636  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:56:27.270647  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:56:27.270912  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 00:56:27.333077  156119 cli_runner.go:211] docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 00:56:27.333140  156119 network_create.go:281] running [docker network inspect multinode-461512] to gather additional debugging logs...
	I0224 00:56:27.333160  156119 cli_runner.go:164] Run: docker network inspect multinode-461512
	W0224 00:56:27.394423  156119 cli_runner.go:211] docker network inspect multinode-461512 returned with exit code 1
	I0224 00:56:27.394449  156119 network_create.go:284] error running [docker network inspect multinode-461512]: docker network inspect multinode-461512: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-461512 not found
	I0224 00:56:27.394460  156119 network_create.go:286] output of [docker network inspect multinode-461512]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-461512 not found
	
	** /stderr **
	I0224 00:56:27.394503  156119 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:56:27.455645  156119 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-05e4e9615d36 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8d:c7:71:a1} reservation:<nil>}
	I0224 00:56:27.456139  156119 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00119dd90}
	I0224 00:56:27.456162  156119 network_create.go:123] attempt to create docker network multinode-461512 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 00:56:27.456214  156119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-461512 multinode-461512
	I0224 00:56:27.554017  156119 network_create.go:107] docker network multinode-461512 192.168.58.0/24 created
	I0224 00:56:27.554044  156119 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-461512" container
	I0224 00:56:27.554110  156119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 00:56:27.614859  156119 cli_runner.go:164] Run: docker volume create multinode-461512 --label name.minikube.sigs.k8s.io=multinode-461512 --label created_by.minikube.sigs.k8s.io=true
	I0224 00:56:27.677245  156119 oci.go:103] Successfully created a docker volume multinode-461512
	I0224 00:56:27.677350  156119 cli_runner.go:164] Run: docker run --rm --name multinode-461512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512 --entrypoint /usr/bin/test -v multinode-461512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 00:56:28.282586  156119 oci.go:107] Successfully prepared a docker volume multinode-461512
	I0224 00:56:28.282625  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:28.282643  156119 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 00:56:28.282700  156119 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 00:56:33.221616  156119 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.938879081s)
	I0224 00:56:33.221645  156119 kic.go:199] duration metric: took 4.938999 seconds to extract preloaded images to volume
	W0224 00:56:33.221783  156119 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 00:56:33.221873  156119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 00:56:33.336975  156119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-461512 --name multinode-461512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-461512 --network multinode-461512 --ip 192.168.58.2 --volume multinode-461512:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 00:56:33.766940  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Running}}
	I0224 00:56:33.835216  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:33.903211  156119 cli_runner.go:164] Run: docker exec multinode-461512 stat /var/lib/dpkg/alternatives/iptables
	I0224 00:56:34.023451  156119 oci.go:144] the created container "multinode-461512" has a running status.
	I0224 00:56:34.023494  156119 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa...
	I0224 00:56:34.149337  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 00:56:34.149415  156119 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 00:56:34.267924  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:34.335216  156119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 00:56:34.335249  156119 kic_runner.go:114] Args: [docker exec --privileged multinode-461512 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 00:56:34.447454  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:56:34.510265  156119 machine.go:88] provisioning docker machine ...
	I0224 00:56:34.510299  156119 ubuntu.go:169] provisioning hostname "multinode-461512"
	I0224 00:56:34.510355  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:34.570139  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:34.570585  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:34.570608  156119 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-461512 && echo "multinode-461512" | sudo tee /etc/hostname
	I0224 00:56:34.705086  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-461512
	
	I0224 00:56:34.705147  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:34.770868  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:34.771436  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:34.771466  156119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-461512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-461512/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-461512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 00:56:34.901142  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
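	(The /etc/hosts edit just logged can be replayed locally; a minimal sketch against a scratch file instead of the real /etc/hosts, with an illustrative old entry and GNU sed assumed:)

```shell
# Reproduce minikube's hostname-provisioning logic on a temp copy of
# /etc/hosts: if no line maps the node name, rewrite (or append) the
# 127.0.1.1 entry. Paths and the stale "old-name" entry are illustrative.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=multinode-461512
if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
        # an existing 127.0.1.1 line is replaced in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
    else
        # otherwise the mapping is appended
        echo "127.0.1.1 ${name}" >> "$hosts"
    fi
fi
cat "$hosts"
```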
	I0224 00:56:34.901175  156119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3785/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3785/.minikube}
	I0224 00:56:34.901198  156119 ubuntu.go:177] setting up certificates
	I0224 00:56:34.901207  156119 provision.go:83] configureAuth start
	I0224 00:56:34.901267  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:34.962515  156119 provision.go:138] copyHostCerts
	I0224 00:56:34.962556  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:56:34.962583  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem, removing ...
	I0224 00:56:34.962593  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:56:34.962663  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem (1123 bytes)
	I0224 00:56:34.962743  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:56:34.962766  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem, removing ...
	I0224 00:56:34.962774  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:56:34.962802  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem (1675 bytes)
	I0224 00:56:34.962872  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:56:34.962896  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem, removing ...
	I0224 00:56:34.962905  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:56:34.962938  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem (1078 bytes)
	I0224 00:56:34.962999  156119 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem org=jenkins.multinode-461512 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-461512]
	I0224 00:56:35.092954  156119 provision.go:172] copyRemoteCerts
	I0224 00:56:35.093010  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 00:56:35.093040  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.156066  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:35.248801  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 00:56:35.248871  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 00:56:35.265353  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 00:56:35.265410  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 00:56:35.281364  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 00:56:35.281420  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 00:56:35.297448  156119 provision.go:86] duration metric: configureAuth took 396.225503ms
	I0224 00:56:35.297476  156119 ubuntu.go:193] setting minikube options for container-runtime
	I0224 00:56:35.297667  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:56:35.297721  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.362747  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.363328  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.363350  156119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 00:56:35.493514  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 00:56:35.493534  156119 ubuntu.go:71] root file system type: overlay
	I0224 00:56:35.493657  156119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 00:56:35.493710  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.558616  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.559156  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.559259  156119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 00:56:35.697748  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 00:56:35.697810  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:35.760792  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:56:35.761192  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0224 00:56:35.761211  156119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 00:56:36.378180  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 00:56:35.693574257 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 00:56:36.378209  156119 machine.go:91] provisioned docker machine in 1.867921549s
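	(The docker.service update above follows a write-if-changed pattern: render the new unit to a `.new` file, diff it against the installed one, and only install and restart when they differ. A minimal sketch with scratch stand-in files; the real flow uses sudo mv, daemon-reload, and systemctl restart:)

```shell
# Sketch of the write-if-changed unit update seen in the log.
# "current" and "new" are temp stand-ins for docker.service and
# docker.service.new; the restart step is reduced to the mv.
current=$(mktemp); new=$(mktemp)
echo "old config" > "$current"
echo "new config" > "$new"
# diff exits non-zero when the files differ, so the branch only
# fires when an update is actually needed (idempotent on re-run)
if ! diff -u "$current" "$new" > /dev/null; then
    mv "$new" "$current"   # stand-in for: sudo mv ... && sudo systemctl restart docker
fi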
	I0224 00:56:36.378217  156119 client.go:171] LocalClient.Create took 9.107743068s
	I0224 00:56:36.378234  156119 start.go:167] duration metric: libmachine.API.Create for "multinode-461512" took 9.10779417s
	I0224 00:56:36.378241  156119 start.go:300] post-start starting for "multinode-461512" (driver="docker")
	I0224 00:56:36.378246  156119 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 00:56:36.378295  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 00:56:36.378328  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.439510  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.532927  156119 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 00:56:36.535317  156119 command_runner.go:130] > NAME="Ubuntu"
	I0224 00:56:36.535333  156119 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 00:56:36.535340  156119 command_runner.go:130] > ID=ubuntu
	I0224 00:56:36.535344  156119 command_runner.go:130] > ID_LIKE=debian
	I0224 00:56:36.535349  156119 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 00:56:36.535353  156119 command_runner.go:130] > VERSION_ID="20.04"
	I0224 00:56:36.535361  156119 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 00:56:36.535368  156119 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 00:56:36.535383  156119 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 00:56:36.535396  156119 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 00:56:36.535404  156119 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 00:56:36.535412  156119 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 00:56:36.535464  156119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 00:56:36.535479  156119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 00:56:36.535487  156119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 00:56:36.535493  156119 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 00:56:36.535501  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/addons for local assets ...
	I0224 00:56:36.535542  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/files for local assets ...
	I0224 00:56:36.535617  156119 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> 104702.pem in /etc/ssl/certs
	I0224 00:56:36.535627  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /etc/ssl/certs/104702.pem
	I0224 00:56:36.535707  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 00:56:36.541717  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:56:36.557600  156119 start.go:303] post-start completed in 179.348653ms
	I0224 00:56:36.557909  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:36.620493  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:56:36.620722  156119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:56:36.620766  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.682430  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.769875  156119 command_runner.go:130] > 16%!
	(MISSING)I0224 00:56:36.769953  156119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 00:56:36.773511  156119 command_runner.go:130] > 246G
	I0224 00:56:36.773535  156119 start.go:128] duration metric: createHost completed in 9.505589343s
	I0224 00:56:36.773545  156119 start.go:83] releasing machines lock for "multinode-461512", held for 9.505686385s
	I0224 00:56:36.773607  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:56:36.838745  156119 ssh_runner.go:195] Run: cat /version.json
	I0224 00:56:36.838787  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.838841  156119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 00:56:36.838903  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:56:36.903175  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:36.905176  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:56:37.023088  156119 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 00:56:37.024457  156119 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0224 00:56:37.024572  156119 ssh_runner.go:195] Run: systemctl --version
	I0224 00:56:37.027986  156119 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0224 00:56:37.028005  156119 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0224 00:56:37.028050  156119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 00:56:37.031393  156119 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 00:56:37.031408  156119 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 00:56:37.031415  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1319702     Links: 1
	I0224 00:56:37.031421  156119 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:56:37.031430  156119 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:56:37.031437  156119 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:56:37.031448  156119 command_runner.go:130] > Change: 2023-02-24 00:41:21.061607898 +0000
	I0224 00:56:37.031457  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:37.031585  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 00:56:37.050031  156119 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 00:56:37.050186  156119 ssh_runner.go:195] Run: which cri-dockerd
	I0224 00:56:37.052699  156119 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 00:56:37.052809  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 00:56:37.059028  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 00:56:37.070915  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 00:56:37.084892  156119 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 00:56:37.084915  156119 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 00:56:37.084925  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:56:37.084950  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:56:37.085032  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:56:37.095926  156119 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:56:37.095944  156119 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:56:37.096606  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 00:56:37.104062  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 00:56:37.111066  156119 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 00:56:37.111114  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 00:56:37.118164  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:56:37.124868  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 00:56:37.131719  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:56:37.138439  156119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 00:56:37.144572  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
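	(The containerd edits above can be replayed against a scratch config.toml; an illustrative sketch, with GNU sed assumed and the real sudo/etc paths replaced by a temp file:)

```shell
# Replay two of the logged cgroup/CNI edits on a minimal fake config.toml:
# force SystemdCgroup off (cgroupfs driver) and point conf_dir at the
# standard CNI directory. The input content here is illustrative.
cfg=$(mktemp)
printf '    SystemdCgroup = true\n    conf_dir = "/tmp/old"\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|' "$cfg"
cat "$cfg"
```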
	I0224 00:56:37.151902  156119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 00:56:37.157204  156119 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 00:56:37.157753  156119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 00:56:37.163514  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:37.230937  156119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 00:56:37.299453  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:56:37.299501  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:56:37.299548  156119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 00:56:37.307899  156119 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 00:56:37.308021  156119 command_runner.go:130] > [Unit]
	I0224 00:56:37.308042  156119 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 00:56:37.308050  156119 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 00:56:37.308057  156119 command_runner.go:130] > BindsTo=containerd.service
	I0224 00:56:37.308074  156119 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 00:56:37.308085  156119 command_runner.go:130] > Wants=network-online.target
	I0224 00:56:37.308099  156119 command_runner.go:130] > Requires=docker.socket
	I0224 00:56:37.308107  156119 command_runner.go:130] > StartLimitBurst=3
	I0224 00:56:37.308115  156119 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 00:56:37.308124  156119 command_runner.go:130] > [Service]
	I0224 00:56:37.308129  156119 command_runner.go:130] > Type=notify
	I0224 00:56:37.308138  156119 command_runner.go:130] > Restart=on-failure
	I0224 00:56:37.308149  156119 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 00:56:37.308168  156119 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 00:56:37.308182  156119 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 00:56:37.308196  156119 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 00:56:37.308212  156119 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 00:56:37.308222  156119 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 00:56:37.308233  156119 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 00:56:37.308246  156119 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 00:56:37.308261  156119 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 00:56:37.308269  156119 command_runner.go:130] > ExecStart=
	I0224 00:56:37.308294  156119 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 00:56:37.308306  156119 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 00:56:37.308317  156119 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 00:56:37.308330  156119 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 00:56:37.308340  156119 command_runner.go:130] > LimitNOFILE=infinity
	I0224 00:56:37.308350  156119 command_runner.go:130] > LimitNPROC=infinity
	I0224 00:56:37.308357  156119 command_runner.go:130] > LimitCORE=infinity
	I0224 00:56:37.308369  156119 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 00:56:37.308381  156119 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 00:56:37.308392  156119 command_runner.go:130] > TasksMax=infinity
	I0224 00:56:37.308401  156119 command_runner.go:130] > TimeoutStartSec=0
	I0224 00:56:37.308412  156119 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 00:56:37.308421  156119 command_runner.go:130] > Delegate=yes
	I0224 00:56:37.308430  156119 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 00:56:37.308441  156119 command_runner.go:130] > KillMode=process
	I0224 00:56:37.308457  156119 command_runner.go:130] > [Install]
	I0224 00:56:37.308467  156119 command_runner.go:130] > WantedBy=multi-user.target
	I0224 00:56:37.308846  156119 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 00:56:37.308906  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 00:56:37.317625  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:56:37.330783  156119 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:56:37.330801  156119 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
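The `crictl.yaml` written above is just the two endpoint lines echoed back by the tee. A safe sketch that writes the same content to a scratch path (instead of `/etc/crictl.yaml`, which needs root):

```shell
# Write the two-line crictl config the log shows, to a temp file.
set -eu
out=$(mktemp)
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  'image-endpoint: unix:///var/run/cri-dockerd.sock' > "$out"
cat "$out"
rm -f "$out"
```

With the real file in place, `crictl` talks to cri-dockerd directly instead of probing the default runtime sockets.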
	I0224 00:56:37.330846  156119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 00:56:37.413769  156119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 00:56:37.493630  156119 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 00:56:37.493664  156119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 00:56:37.506311  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:37.587705  156119 ssh_runner.go:195] Run: sudo systemctl restart docker
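The log copies a 144-byte `daemon.json` forcing the "cgroupfs" cgroup driver but never prints its contents. An assumed minimal equivalent is sketched below: the `exec-opts` key is dockerd's standard knob for the cgroup driver, but the exact payload minikube ships is not shown in this log, so treat this as illustrative only. It writes to a scratch file; on a real node the file would land at `/etc/docker/daemon.json`, followed by the `daemon-reload` and `restart docker` steps above.

```shell
# Write an assumed-minimal daemon.json selecting the cgroupfs driver.
set -eu
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -q 'native.cgroupdriver=cgroupfs' "$f" && echo "cgroupfs driver configured"
rm -f "$f"
```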
	I0224 00:56:37.780085  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:56:37.859962  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 00:56:37.860034  156119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 00:56:37.931654  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:56:38.003044  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:56:38.072134  156119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 00:56:38.082148  156119 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 00:56:38.082204  156119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 00:56:38.084772  156119 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 00:56:38.084794  156119 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 00:56:38.084801  156119 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0224 00:56:38.084807  156119 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 00:56:38.084813  156119 command_runner.go:130] > Access: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084818  156119 command_runner.go:130] > Modify: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084822  156119 command_runner.go:130] > Change: 2023-02-24 00:56:38.077813978 +0000
	I0224 00:56:38.084826  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:38.084877  156119 start.go:553] Will wait 60s for crictl version
	I0224 00:56:38.084927  156119 ssh_runner.go:195] Run: which crictl
	I0224 00:56:38.087254  156119 command_runner.go:130] > /usr/bin/crictl
	I0224 00:56:38.087305  156119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 00:56:38.159679  156119 command_runner.go:130] > Version:  0.1.0
	I0224 00:56:38.159697  156119 command_runner.go:130] > RuntimeName:  docker
	I0224 00:56:38.159701  156119 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 00:56:38.159707  156119 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 00:56:38.161040  156119 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 00:56:38.161101  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:56:38.181407  156119 command_runner.go:130] > 23.0.1
	I0224 00:56:38.181463  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:56:38.200527  156119 command_runner.go:130] > 23.0.1
	I0224 00:56:38.204165  156119 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 00:56:38.204242  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:56:38.265920  156119 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0224 00:56:38.268985  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 00:56:38.277933  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:56:38.277995  156119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 00:56:38.293187  156119 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 00:56:38.293214  156119 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 00:56:38.293221  156119 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 00:56:38.293231  156119 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 00:56:38.293237  156119 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 00:56:38.293246  156119 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 00:56:38.293252  156119 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 00:56:38.293263  156119 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:56:38.294525  156119 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 00:56:38.294545  156119 docker.go:560] Images already preloaded, skipping extraction
	I0224 00:56:38.294595  156119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 00:56:38.310945  156119 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 00:56:38.310963  156119 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 00:56:38.310968  156119 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 00:56:38.310978  156119 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 00:56:38.310982  156119 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 00:56:38.310988  156119 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 00:56:38.310995  156119 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 00:56:38.311006  156119 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:56:38.311038  156119 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 00:56:38.311049  156119 cache_images.go:84] Images are preloaded, skipping loading
	I0224 00:56:38.311090  156119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 00:56:38.330852  156119 command_runner.go:130] > cgroupfs
	I0224 00:56:38.332033  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:38.332048  156119 cni.go:136] 1 nodes found, recommending kindnet
	I0224 00:56:38.332062  156119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 00:56:38.332089  156119 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-461512 NodeName:multinode-461512 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 00:56:38.332230  156119 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-461512"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 00:56:38.332315  156119 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-461512 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 00:56:38.332364  156119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 00:56:38.338168  156119 command_runner.go:130] > kubeadm
	I0224 00:56:38.338183  156119 command_runner.go:130] > kubectl
	I0224 00:56:38.338186  156119 command_runner.go:130] > kubelet
	I0224 00:56:38.338735  156119 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 00:56:38.338786  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 00:56:38.344787  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0224 00:56:38.356127  156119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 00:56:38.367214  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0224 00:56:38.378556  156119 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 00:56:38.381008  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
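The `/etc/hosts` refresh above uses a small dedupe-then-append idiom: drop any stale line ending in the hostname, append the fresh mapping, then copy the result over the original. Against a scratch hosts file (so no sudo is needed) it looks like:

```shell
# Refresh a host entry the way the log's command does, on a scratch hosts file.
set -eu
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.58.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any stale mapping for the name, then append the fresh one.
{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"; \
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"   # now maps to 192.168.58.2
rm -f "$hosts" "$hosts.new"
```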
	I0224 00:56:38.389177  156119 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512 for IP: 192.168.58.2
	I0224 00:56:38.389209  156119 certs.go:186] acquiring lock for shared ca certs: {Name:mk4ccb66e3fb9104eb70d9107cb5563409a81019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.389322  156119 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key
	I0224 00:56:38.389357  156119 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key
	I0224 00:56:38.389393  156119 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key
	I0224 00:56:38.389404  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt with IP's: []
	I0224 00:56:38.550905  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt ...
	I0224 00:56:38.550929  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt: {Name:mkafd0f423e00282b1b80243bc87a0ef26cc5d80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.551073  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key ...
	I0224 00:56:38.551084  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key: {Name:mk5a620a352449f2cb23b01bb46cef5a02dbb2e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.551151  156119 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041
	I0224 00:56:38.551164  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 00:56:38.838168  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 ...
	I0224 00:56:38.838194  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041: {Name:mkb8322635e4298b3da32d32211030b8ff4d5117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.838330  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041 ...
	I0224 00:56:38.838340  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041: {Name:mk4e888a9de7fdd8f3164b7a40013da92cef9186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.838401  156119 certs.go:333] copying /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt
	I0224 00:56:38.838461  156119 certs.go:337] copying /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key
	I0224 00:56:38.838505  156119 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key
	I0224 00:56:38.838517  156119 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt with IP's: []
	I0224 00:56:38.981872  156119 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt ...
	I0224 00:56:38.981900  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt: {Name:mkc6815434daf237e1887623e67b42e18f74a84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.982037  156119 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key ...
	I0224 00:56:38.982046  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key: {Name:mk8aa28fb7dd19a668557103ac8ed3108ce67ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:56:38.982122  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 00:56:38.982139  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 00:56:38.982148  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 00:56:38.982160  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 00:56:38.982169  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 00:56:38.982181  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 00:56:38.982193  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 00:56:38.982208  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 00:56:38.982264  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem (1338 bytes)
	W0224 00:56:38.982300  156119 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470_empty.pem, impossibly tiny 0 bytes
	I0224 00:56:38.982310  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 00:56:38.982333  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem (1078 bytes)
	I0224 00:56:38.982361  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem (1123 bytes)
	I0224 00:56:38.982382  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem (1675 bytes)
	I0224 00:56:38.982418  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:56:38.982445  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /usr/share/ca-certificates/104702.pem
	I0224 00:56:38.982459  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:38.982474  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem -> /usr/share/ca-certificates/10470.pem
	I0224 00:56:38.982949  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 00:56:39.000389  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 00:56:39.016474  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 00:56:39.032358  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 00:56:39.048338  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 00:56:39.063738  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 00:56:39.079202  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 00:56:39.094289  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 00:56:39.109512  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /usr/share/ca-certificates/104702.pem (1708 bytes)
	I0224 00:56:39.124994  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 00:56:39.140120  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem --> /usr/share/ca-certificates/10470.pem (1338 bytes)
	I0224 00:56:39.155148  156119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 00:56:39.166172  156119 ssh_runner.go:195] Run: openssl version
	I0224 00:56:39.170208  156119 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 00:56:39.170474  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 00:56:39.176912  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179556  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179633  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.179673  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:56:39.183599  156119 command_runner.go:130] > b5213941
	I0224 00:56:39.183715  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 00:56:39.189989  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10470.pem && ln -fs /usr/share/ca-certificates/10470.pem /etc/ssl/certs/10470.pem"
	I0224 00:56:39.196414  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.198952  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.199009  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.199036  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10470.pem
	I0224 00:56:39.202956  156119 command_runner.go:130] > 51391683
	I0224 00:56:39.203141  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10470.pem /etc/ssl/certs/51391683.0"
	I0224 00:56:39.209360  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104702.pem && ln -fs /usr/share/ca-certificates/104702.pem /etc/ssl/certs/104702.pem"
	I0224 00:56:39.215931  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218533  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218651  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.218687  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104702.pem
	I0224 00:56:39.222902  156119 command_runner.go:130] > 3ec20f2e
	I0224 00:56:39.222946  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104702.pem /etc/ssl/certs/3ec20f2e.0"
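The openssl sequence above follows OpenSSL's hashed-directory lookup convention: a CA in `/etc/ssl/certs` is found through a symlink named `<subject-hash>.0`, where the hash is what `openssl x509 -hash -noout` prints. A self-contained sketch using a throwaway self-signed cert in a scratch directory (so it runs unprivileged):

```shell
# Create a demo CA and install it under its subject-hash name, mirroring the
# `test -L ... || ln -fs ...` steps in the log.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
test -L "$dir/$hash.0" || ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl x509 -noout -subject -in "$dir/$hash.0"
rm -rf "$dir"
```

This is the same scheme `c_rehash`/`update-ca-certificates` automate; minikube does it by hand for each PEM it installs.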
	I0224 00:56:39.229292  156119 kubeadm.go:401] StartCluster: {Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:56:39.229399  156119 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 00:56:39.245533  156119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 00:56:39.251861  156119 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0224 00:56:39.251886  156119 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0224 00:56:39.251893  156119 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0224 00:56:39.251936  156119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 00:56:39.258076  156119 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 00:56:39.258115  156119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 00:56:39.264000  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0224 00:56:39.264024  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0224 00:56:39.264037  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0224 00:56:39.264045  156119 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 00:56:39.264071  156119 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 00:56:39.264094  156119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 00:56:39.301420  156119 kubeadm.go:322] W0224 00:56:39.300738    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:56:39.301442  156119 command_runner.go:130] ! W0224 00:56:39.300738    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:56:39.339556  156119 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:56:39.339591  156119 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:56:39.400297  156119 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:56:39.400324  156119 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:56:51.057939  156119 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 00:56:51.057962  156119 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0224 00:56:51.058021  156119 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 00:56:51.058083  156119 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 00:56:51.058218  156119 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:56:51.058232  156119 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:56:51.058303  156119 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:56:51.058315  156119 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:56:51.058371  156119 kubeadm.go:322] OS: Linux
	I0224 00:56:51.058383  156119 command_runner.go:130] > OS: Linux
	I0224 00:56:51.058440  156119 kubeadm.go:322] CGROUPS_CPU: enabled
	I0224 00:56:51.058451  156119 command_runner.go:130] > CGROUPS_CPU: enabled
	I0224 00:56:51.058514  156119 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0224 00:56:51.058525  156119 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0224 00:56:51.058586  156119 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0224 00:56:51.058604  156119 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0224 00:56:51.058667  156119 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0224 00:56:51.058682  156119 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0224 00:56:51.058747  156119 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0224 00:56:51.058757  156119 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0224 00:56:51.058823  156119 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0224 00:56:51.058833  156119 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0224 00:56:51.058892  156119 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0224 00:56:51.058906  156119 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0224 00:56:51.058973  156119 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0224 00:56:51.058982  156119 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0224 00:56:51.059043  156119 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0224 00:56:51.059054  156119 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0224 00:56:51.059145  156119 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 00:56:51.059158  156119 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 00:56:51.059278  156119 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 00:56:51.059289  156119 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 00:56:51.059413  156119 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 00:56:51.059424  156119 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 00:56:51.059503  156119 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 00:56:51.061269  156119 out.go:204]   - Generating certificates and keys ...
	I0224 00:56:51.059648  156119 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 00:56:51.061393  156119 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 00:56:51.061410  156119 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0224 00:56:51.061501  156119 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 00:56:51.061521  156119 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0224 00:56:51.061611  156119 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 00:56:51.061624  156119 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 00:56:51.061709  156119 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 00:56:51.061728  156119 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0224 00:56:51.061816  156119 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 00:56:51.061832  156119 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0224 00:56:51.061911  156119 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 00:56:51.061924  156119 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0224 00:56:51.062009  156119 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 00:56:51.062022  156119 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0224 00:56:51.062206  156119 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062225  156119 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062310  156119 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 00:56:51.062351  156119 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0224 00:56:51.062503  156119 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062512  156119 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-461512] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 00:56:51.062603  156119 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 00:56:51.062617  156119 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 00:56:51.062694  156119 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 00:56:51.062704  156119 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 00:56:51.062768  156119 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 00:56:51.062778  156119 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0224 00:56:51.062863  156119 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 00:56:51.062874  156119 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 00:56:51.062941  156119 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 00:56:51.062951  156119 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 00:56:51.063014  156119 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 00:56:51.063026  156119 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 00:56:51.063122  156119 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 00:56:51.063138  156119 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 00:56:51.063223  156119 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 00:56:51.063236  156119 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 00:56:51.063415  156119 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:56:51.063431  156119 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:56:51.063544  156119 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:56:51.063555  156119 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:56:51.063620  156119 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 00:56:51.063637  156119 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 00:56:51.063754  156119 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 00:56:51.065411  156119 out.go:204]   - Booting up control plane ...
	I0224 00:56:51.063794  156119 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 00:56:51.065526  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 00:56:51.065540  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 00:56:51.065627  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 00:56:51.065646  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 00:56:51.065732  156119 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 00:56:51.065743  156119 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 00:56:51.065846  156119 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 00:56:51.065857  156119 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 00:56:51.066012  156119 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 00:56:51.066043  156119 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 00:56:51.066186  156119 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002046 seconds
	I0224 00:56:51.066201  156119 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002046 seconds
	I0224 00:56:51.066352  156119 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 00:56:51.066367  156119 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 00:56:51.066537  156119 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 00:56:51.066546  156119 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 00:56:51.066616  156119 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 00:56:51.066626  156119 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0224 00:56:51.066800  156119 kubeadm.go:322] [mark-control-plane] Marking the node multinode-461512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 00:56:51.066808  156119 command_runner.go:130] > [mark-control-plane] Marking the node multinode-461512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 00:56:51.066879  156119 kubeadm.go:322] [bootstrap-token] Using token: 7kk0e7.ephgzxkdwnb2txax
	I0224 00:56:51.068497  156119 out.go:204]   - Configuring RBAC rules ...
	I0224 00:56:51.066918  156119 command_runner.go:130] > [bootstrap-token] Using token: 7kk0e7.ephgzxkdwnb2txax
	I0224 00:56:51.068601  156119 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 00:56:51.068612  156119 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 00:56:51.068724  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 00:56:51.068744  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 00:56:51.068881  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 00:56:51.068889  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 00:56:51.069045  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 00:56:51.069067  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 00:56:51.069216  156119 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 00:56:51.069228  156119 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 00:56:51.069310  156119 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 00:56:51.069316  156119 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 00:56:51.069410  156119 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 00:56:51.069416  156119 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 00:56:51.069468  156119 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 00:56:51.069484  156119 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0224 00:56:51.069544  156119 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 00:56:51.069555  156119 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0224 00:56:51.069562  156119 kubeadm.go:322] 
	I0224 00:56:51.069626  156119 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 00:56:51.069634  156119 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0224 00:56:51.069638  156119 kubeadm.go:322] 
	I0224 00:56:51.069713  156119 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 00:56:51.069723  156119 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0224 00:56:51.069729  156119 kubeadm.go:322] 
	I0224 00:56:51.069767  156119 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 00:56:51.069778  156119 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0224 00:56:51.069851  156119 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 00:56:51.069860  156119 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 00:56:51.069924  156119 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 00:56:51.069934  156119 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 00:56:51.069943  156119 kubeadm.go:322] 
	I0224 00:56:51.070033  156119 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 00:56:51.070041  156119 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0224 00:56:51.070045  156119 kubeadm.go:322] 
	I0224 00:56:51.070128  156119 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 00:56:51.070145  156119 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 00:56:51.070156  156119 kubeadm.go:322] 
	I0224 00:56:51.070224  156119 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 00:56:51.070230  156119 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0224 00:56:51.070326  156119 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 00:56:51.070336  156119 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 00:56:51.070422  156119 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 00:56:51.070436  156119 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 00:56:51.070448  156119 kubeadm.go:322] 
	I0224 00:56:51.070536  156119 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 00:56:51.070542  156119 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0224 00:56:51.070625  156119 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 00:56:51.070640  156119 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0224 00:56:51.070650  156119 kubeadm.go:322] 
	I0224 00:56:51.070779  156119 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.070795  156119 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.070928  156119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 \
	I0224 00:56:51.070940  156119 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 \
	I0224 00:56:51.070982  156119 kubeadm.go:322] 	--control-plane 
	I0224 00:56:51.070994  156119 command_runner.go:130] > 	--control-plane 
	I0224 00:56:51.071005  156119 kubeadm.go:322] 
	I0224 00:56:51.071117  156119 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 00:56:51.071129  156119 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0224 00:56:51.071135  156119 kubeadm.go:322] 
	I0224 00:56:51.071234  156119 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.071243  156119 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7kk0e7.ephgzxkdwnb2txax \
	I0224 00:56:51.071323  156119 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:56:51.071330  156119 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:56:51.071344  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:56:51.071356  156119 cni.go:136] 1 nodes found, recommending kindnet
	I0224 00:56:51.072989  156119 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 00:56:51.074346  156119 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 00:56:51.077387  156119 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 00:56:51.077406  156119 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 00:56:51.077416  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1317791     Links: 1
	I0224 00:56:51.077425  156119 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:56:51.077434  156119 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:56:51.077446  156119 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:56:51.077454  156119 command_runner.go:130] > Change: 2023-02-24 00:41:20.329534418 +0000
	I0224 00:56:51.077472  156119 command_runner.go:130] >  Birth: -
	I0224 00:56:51.077544  156119 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 00:56:51.077564  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 00:56:51.093145  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 00:56:51.754878  156119 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0224 00:56:51.760806  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0224 00:56:51.766111  156119 command_runner.go:130] > serviceaccount/kindnet created
	I0224 00:56:51.776190  156119 command_runner.go:130] > daemonset.apps/kindnet created
	I0224 00:56:51.779605  156119 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 00:56:51.779732  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-461512 minikube.k8s.io/updated_at=2023_02_24T00_56_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.779732  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.786415  156119 command_runner.go:130] > -16
	I0224 00:56:51.786479  156119 ops.go:34] apiserver oom_adj: -16
	I0224 00:56:51.868085  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0224 00:56:51.868178  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:51.872766  156119 command_runner.go:130] > node/multinode-461512 labeled
	I0224 00:56:51.926103  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:52.429033  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:52.490103  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:52.928665  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:52.989441  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:53.429319  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:53.487454  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:53.928500  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:53.984992  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:54.428598  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:54.487697  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:54.929351  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:54.987839  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:55.428648  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:55.489701  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:55.929379  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:55.989542  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:56.429161  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:56.491917  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:56.929397  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:56.990258  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:57.428825  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:57.490396  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:57.929028  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:57.987311  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:58.429060  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:58.486546  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:58.928830  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:58.987208  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:59.429201  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:59.488299  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:56:59.929370  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:56:59.990335  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:00.428554  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:00.488248  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:00.928524  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:00.987544  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:01.429489  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:01.486969  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:01.929290  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:01.990759  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:02.428652  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:02.558544  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:02.929095  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:02.989541  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:03.428551  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:03.488262  156119 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 00:57:03.928613  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 00:57:03.992047  156119 command_runner.go:130] > NAME      SECRETS   AGE
	I0224 00:57:03.992076  156119 command_runner.go:130] > default   0         0s
	I0224 00:57:03.994465  156119 kubeadm.go:1073] duration metric: took 12.214785856s to wait for elevateKubeSystemPrivileges.
	I0224 00:57:03.994495  156119 kubeadm.go:403] StartCluster complete in 24.765210823s
	I0224 00:57:03.994512  156119 settings.go:142] acquiring lock: {Name:mkee07ffcb1920ada8b15d9b3d3940c229b3dfc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:03.994587  156119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:03.995299  156119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3785/kubeconfig: {Name:mk3a4444ec91b5e085feb2b9897845e988f9c9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:03.995495  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 00:57:03.995643  156119 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 00:57:03.995751  156119 addons.go:65] Setting storage-provisioner=true in profile "multinode-461512"
	I0224 00:57:03.995772  156119 addons.go:227] Setting addon storage-provisioner=true in "multinode-461512"
	I0224 00:57:03.995770  156119 addons.go:65] Setting default-storageclass=true in profile "multinode-461512"
	I0224 00:57:03.995803  156119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-461512"
	I0224 00:57:03.995831  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:03.995773  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:03.995872  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:03.996157  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:03.996140  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:03.996333  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:04.000853  156119 cert_rotation.go:137] Starting client certificate rotation controller
	I0224 00:57:04.001173  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.001192  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.001204  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.001218  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.014777  156119 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0224 00:57:04.014799  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.014807  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.014813  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.014820  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.014831  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.014839  156119 round_trippers.go:580]     Audit-Id: da8b48bc-15a0-4f51-a3fb-fa5179cd269a
	I0224 00:57:04.014852  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.014861  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.014893  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"354","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.015331  156119 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"354","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.015370  156119 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.015375  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.015381  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.015387  156119 round_trippers.go:473]     Content-Type: application/json
	I0224 00:57:04.015393  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.021473  156119 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0224 00:57:04.021492  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.021500  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.021506  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.021511  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.021516  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.021521  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.021527  156119 round_trippers.go:580]     Audit-Id: a5ce785d-eaf4-47d3-899b-884496bf15bc
	I0224 00:57:04.021533  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.021550  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"355","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.095148  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:04.095363  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:04.095631  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0224 00:57:04.095637  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.095644  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.095652  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.100206  156119 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 00:57:04.098443  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:04.101631  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.101646  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.101656  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.101669  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.101679  156119 round_trippers.go:580]     Content-Length: 109
	I0224 00:57:04.101695  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.101705  156119 round_trippers.go:580]     Audit-Id: 17d756db-f91a-416a-a957-67dd9a9e7055
	I0224 00:57:04.101718  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.101747  156119 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 00:57:04.101770  156119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 00:57:04.101821  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:04.101750  156119 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"364"},"items":[]}
	I0224 00:57:04.102214  156119 addons.go:227] Setting addon default-storageclass=true in "multinode-461512"
	I0224 00:57:04.102247  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:04.102543  156119 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:57:04.173040  156119 command_runner.go:130] > apiVersion: v1
	I0224 00:57:04.173066  156119 command_runner.go:130] > data:
	I0224 00:57:04.173092  156119 command_runner.go:130] >   Corefile: |
	I0224 00:57:04.173101  156119 command_runner.go:130] >     .:53 {
	I0224 00:57:04.173114  156119 command_runner.go:130] >         errors
	I0224 00:57:04.173122  156119 command_runner.go:130] >         health {
	I0224 00:57:04.173130  156119 command_runner.go:130] >            lameduck 5s
	I0224 00:57:04.173137  156119 command_runner.go:130] >         }
	I0224 00:57:04.173144  156119 command_runner.go:130] >         ready
	I0224 00:57:04.173153  156119 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0224 00:57:04.173165  156119 command_runner.go:130] >            pods insecure
	I0224 00:57:04.173173  156119 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0224 00:57:04.173183  156119 command_runner.go:130] >            ttl 30
	I0224 00:57:04.173190  156119 command_runner.go:130] >         }
	I0224 00:57:04.173197  156119 command_runner.go:130] >         prometheus :9153
	I0224 00:57:04.173205  156119 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0224 00:57:04.173213  156119 command_runner.go:130] >            max_concurrent 1000
	I0224 00:57:04.173219  156119 command_runner.go:130] >         }
	I0224 00:57:04.173226  156119 command_runner.go:130] >         cache 30
	I0224 00:57:04.173233  156119 command_runner.go:130] >         loop
	I0224 00:57:04.173239  156119 command_runner.go:130] >         reload
	I0224 00:57:04.173246  156119 command_runner.go:130] >         loadbalance
	I0224 00:57:04.173251  156119 command_runner.go:130] >     }
	I0224 00:57:04.173257  156119 command_runner.go:130] > kind: ConfigMap
	I0224 00:57:04.173264  156119 command_runner.go:130] > metadata:
	I0224 00:57:04.173276  156119 command_runner.go:130] >   creationTimestamp: "2023-02-24T00:56:50Z"
	I0224 00:57:04.173282  156119 command_runner.go:130] >   name: coredns
	I0224 00:57:04.173290  156119 command_runner.go:130] >   namespace: kube-system
	I0224 00:57:04.173296  156119 command_runner.go:130] >   resourceVersion: "233"
	I0224 00:57:04.173304  156119 command_runner.go:130] >   uid: 7fe2b65d-0034-4b86-8324-3680843f0957
	I0224 00:57:04.173501  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 00:57:04.231198  156119 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 00:57:04.231220  156119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 00:57:04.231262  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:04.234028  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:04.307570  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:04.448740  156119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 00:57:04.467408  156119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 00:57:04.522600  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:04.522618  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:04.522626  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:04.522632  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:04.550666  156119 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0224 00:57:04.550692  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:04.550702  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:04 GMT
	I0224 00:57:04.550711  156119 round_trippers.go:580]     Audit-Id: e1198285-4281-4004-8a55-ba3728334db4
	I0224 00:57:04.550719  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:04.550728  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:04.550736  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:04.550744  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:04.550760  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:04.550791  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"364","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:04.550907  156119 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-461512" context rescaled to 1 replicas
	I0224 00:57:04.550938  156119 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 00:57:04.553823  156119 out.go:177] * Verifying Kubernetes components...
	I0224 00:57:04.555419  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:04.657342  156119 command_runner.go:130] > configmap/coredns replaced
	I0224 00:57:04.661992  156119 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0224 00:57:05.365630  156119 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0224 00:57:05.365715  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0224 00:57:05.365737  156119 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 00:57:05.365757  156119 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 00:57:05.365785  156119 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0224 00:57:05.365812  156119 command_runner.go:130] > pod/storage-provisioner created
	I0224 00:57:05.365893  156119 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0224 00:57:05.367825  156119 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 00:57:05.366490  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:05.369275  156119 addons.go:492] enable addons completed in 1.373630033s: enabled=[storage-provisioner default-storageclass]
	I0224 00:57:05.369486  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:05.369704  156119 node_ready.go:35] waiting up to 6m0s for node "multinode-461512" to be "Ready" ...
	I0224 00:57:05.369755  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.369761  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.369769  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.369777  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.371500  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.371521  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.371530  156119 round_trippers.go:580]     Audit-Id: 327d98f5-d198-48ad-8f2b-a22ca674e747
	I0224 00:57:05.371539  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.371557  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.371573  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.371581  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.371595  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.371694  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:05.372371  156119 node_ready.go:49] node "multinode-461512" has status "Ready":"True"
	I0224 00:57:05.372387  156119 node_ready.go:38] duration metric: took 2.669497ms waiting for node "multinode-461512" to be "Ready" ...
	I0224 00:57:05.372396  156119 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:05.372462  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:05.372472  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.372484  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.372496  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.375520  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:05.375534  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.375540  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.375546  156119 round_trippers.go:580]     Audit-Id: d51323f6-269f-4730-9b3a-6748ce95ebd6
	I0224 00:57:05.375551  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.375557  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.375562  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.375568  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.375875  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"381"},"items":[{"metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60467 chars]
	I0224 00:57:05.379460  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:05.379560  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:05.379589  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.379617  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.379634  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.449393  156119 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0224 00:57:05.449426  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.449436  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.449446  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.449460  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.449474  156119 round_trippers.go:580]     Audit-Id: 59906891-2606-4784-beb9-1b83db7e30c1
	I0224 00:57:05.449492  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.449506  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.449638  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:05.450220  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.450270  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.450291  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.450311  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.452259  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.452283  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.452292  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.452320  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.452335  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.452350  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.452364  156119 round_trippers.go:580]     Audit-Id: 5cf93540-2541-4c5c-9d68-40947dde9727
	I0224 00:57:05.452392  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.452560  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:05.953722  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:05.953749  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.953762  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.953771  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.956166  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:05.956190  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.956199  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.956205  156119 round_trippers.go:580]     Audit-Id: 1b968c3b-85d7-4530-aeb7-eaca81036baf
	I0224 00:57:05.956211  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.956220  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.956231  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.956241  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.956333  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:05.956738  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:05.956748  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:05.956755  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:05.956761  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:05.958635  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:05.958652  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:05.958659  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:05.958664  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:05.958670  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:05.958676  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:05.958693  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:05 GMT
	I0224 00:57:05.958702  156119 round_trippers.go:580]     Audit-Id: a29a3197-0e31-4f4e-b593-b4e8a01e5316
	I0224 00:57:05.958795  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:06.454027  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:06.454049  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.454080  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.454089  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.457335  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:06.457398  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.457421  156119 round_trippers.go:580]     Audit-Id: 8e0f50c7-1d0d-4309-8ab7-65bb946f9f6a
	I0224 00:57:06.457441  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.457465  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.457475  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.457485  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.457513  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.457632  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"357","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0224 00:57:06.458247  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:06.458262  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.458274  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.458283  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.460054  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.460075  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.460085  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.460094  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.460103  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.460119  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.460127  156119 round_trippers.go:580]     Audit-Id: 096acd8f-c7eb-4484-a660-37f04ab7ca8d
	I0224 00:57:06.460139  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.460252  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:06.953075  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:06.953093  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.953101  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.953108  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.954834  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.954853  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.954860  156119 round_trippers.go:580]     Audit-Id: d41074bd-aa8c-43bc-b383-7f3d6e27b665
	I0224 00:57:06.954867  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.954872  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.954877  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.954883  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.954889  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.954996  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:06.955448  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:06.955463  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:06.955476  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:06.955491  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:06.957250  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:06.957273  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:06.957283  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:06.957293  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:06.957302  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:06.957312  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:06 GMT
	I0224 00:57:06.957324  156119 round_trippers.go:580]     Audit-Id: f9ca1c24-f027-463b-8cb9-bfcd6eca4fb0
	I0224 00:57:06.957334  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:06.957446  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:07.453059  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:07.453079  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.453087  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.453093  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.454891  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.454916  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.454924  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.454930  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.454939  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.454947  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.454960  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.454970  156119 round_trippers.go:580]     Audit-Id: fe46c642-0845-48c6-bb77-26600ead4367
	I0224 00:57:07.455063  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:07.455586  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:07.455601  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.455613  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.455622  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.457140  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.457156  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.457163  156119 round_trippers.go:580]     Audit-Id: f3c372b4-9a75-4144-a15f-f52892ef7bc4
	I0224 00:57:07.457169  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.457176  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.457186  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.457196  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.457205  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.457367  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:07.457667  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:07.954022  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:07.954044  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.954056  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.954082  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.955933  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.955957  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.955967  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.955977  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.955986  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.955995  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.956006  156119 round_trippers.go:580]     Audit-Id: 41ed5086-9afc-4916-a3e1-44992d32fc6a
	I0224 00:57:07.956014  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.956162  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:07.956734  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:07.956748  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:07.956755  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:07.956761  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:07.958497  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:07.958514  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:07.958521  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:07.958527  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:07 GMT
	I0224 00:57:07.958533  156119 round_trippers.go:580]     Audit-Id: eab9681e-a99d-4ecf-bff6-6f44beb4097c
	I0224 00:57:07.958540  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:07.958548  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:07.958559  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:07.958656  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:08.453330  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:08.453352  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.453361  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.453368  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.455431  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:08.455451  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.455458  156119 round_trippers.go:580]     Audit-Id: 0d4533fd-ab86-475e-9909-a672a5af3d30
	I0224 00:57:08.455464  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.455469  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.455474  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.455483  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.455491  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.455629  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:08.456123  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:08.456135  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.456142  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.456149  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.457627  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.457644  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.457650  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.457656  156119 round_trippers.go:580]     Audit-Id: 90166fdf-366e-4886-a91f-21f9602e3879
	I0224 00:57:08.457662  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.457676  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.457684  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.457696  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.457813  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:08.953355  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:08.953375  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.953383  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.953389  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.955394  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.955418  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.955427  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.955439  156119 round_trippers.go:580]     Audit-Id: 39ced71f-62d9-4a6c-8428-1c3c1396f33d
	I0224 00:57:08.955448  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.955457  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.955465  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.955479  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.955567  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:08.956038  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:08.956053  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:08.956063  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:08.956072  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:08.957695  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:08.957712  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:08.957722  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:08 GMT
	I0224 00:57:08.957732  156119 round_trippers.go:580]     Audit-Id: e2601c7b-9631-4702-aed0-d430378ff3c7
	I0224 00:57:08.957745  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:08.957751  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:08.957758  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:08.957764  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:08.957880  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:09.453369  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:09.453390  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.453403  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.453411  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.455436  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.455464  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.455476  156119 round_trippers.go:580]     Audit-Id: b51be5eb-29c5-489e-855e-afa50317332f
	I0224 00:57:09.455484  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.455491  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.455500  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.455515  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.455525  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.455656  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:09.456227  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:09.456244  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.456253  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.456262  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.457714  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.457735  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.457744  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.457752  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.457763  156119 round_trippers.go:580]     Audit-Id: a1883485-24fc-4eec-8e11-54351a2bcca8
	I0224 00:57:09.457772  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.457780  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.457790  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.457920  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:09.458304  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:09.953260  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:09.953300  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.953346  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.953356  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.955493  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:09.955515  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.955525  156119 round_trippers.go:580]     Audit-Id: 58c5585f-1de3-4dab-89b1-8079c6dbbdc0
	I0224 00:57:09.955531  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.955536  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.955542  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.955547  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.955553  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.955642  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:09.956174  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:09.956194  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:09.956205  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:09.956214  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:09.957827  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:09.957845  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:09.957854  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:09.957862  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:09.957870  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:09.957879  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:09.957888  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:09 GMT
	I0224 00:57:09.957898  156119 round_trippers.go:580]     Audit-Id: 7cc230b0-4b42-43cd-bb83-06108a39273a
	I0224 00:57:09.957992  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:10.453534  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:10.453554  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.453562  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.453569  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.455781  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:10.455804  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.455815  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.455822  156119 round_trippers.go:580]     Audit-Id: 0109268f-f1c9-4475-bb53-6c032bdca083
	I0224 00:57:10.455830  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.455842  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.455866  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.455878  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.455977  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:10.456485  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:10.456499  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.456510  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.456518  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.458118  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:10.458139  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.458150  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.458158  156119 round_trippers.go:580]     Audit-Id: b35a3269-dce3-4b70-8680-770782bbd264
	I0224 00:57:10.458166  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.458176  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.458189  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.458199  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.458313  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:10.953981  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:10.954008  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.954021  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.954031  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.956500  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:10.956525  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.956535  156119 round_trippers.go:580]     Audit-Id: e824ed4b-8ed8-4feb-8fff-35594d2ea94a
	I0224 00:57:10.956543  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.956551  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.956559  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.956569  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.956577  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.956689  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:10.957245  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:10.957261  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:10.957273  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:10.957282  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:10.959225  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:10.959250  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:10.959260  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:10.959272  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:10 GMT
	I0224 00:57:10.959286  156119 round_trippers.go:580]     Audit-Id: 56da9001-b0fe-4d34-9e44-3e94b13abbf4
	I0224 00:57:10.959295  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:10.959309  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:10.959319  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:10.959482  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:11.453072  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:11.453091  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.453099  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.453105  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.456655  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:11.456678  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.456688  156119 round_trippers.go:580]     Audit-Id: 74c1c589-92bc-40d6-b9f9-83c125ad06ef
	I0224 00:57:11.456697  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.456706  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.456715  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.456724  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.456733  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.456848  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:11.457430  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:11.457443  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.457455  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.457465  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.459551  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:11.459572  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.459582  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.459591  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.459609  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.459623  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.459644  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.459652  156119 round_trippers.go:580]     Audit-Id: bcd61aa2-3aca-4122-8931-6a4d656927fc
	I0224 00:57:11.459769  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"309","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:11.460148  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:11.953478  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:11.953550  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.953583  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.953623  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.956110  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:11.956129  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.956137  156119 round_trippers.go:580]     Audit-Id: 749813bf-cb7f-4fdd-bf3f-ee531176b82d
	I0224 00:57:11.956146  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.956155  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.956167  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.956178  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.956190  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.956323  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:11.956945  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:11.956969  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:11.956982  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:11.956992  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:11.958976  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:11.958992  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:11.959002  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:11.959011  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:11.959020  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:11 GMT
	I0224 00:57:11.959038  156119 round_trippers.go:580]     Audit-Id: e51b67ee-35fa-48ec-94be-e8e562a0c6a5
	I0224 00:57:11.959046  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:11.959054  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:11.959182  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:12.453907  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:12.453932  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.453944  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.453955  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.456269  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:12.456293  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.456313  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.456322  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.456330  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.456343  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.456354  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.456363  156119 round_trippers.go:580]     Audit-Id: b091047b-a0cc-44ca-a39e-f769a199843b
	I0224 00:57:12.456468  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:12.456927  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:12.456940  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.456948  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.456954  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.458795  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:12.458817  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.458828  156119 round_trippers.go:580]     Audit-Id: 41c179b8-64d3-430c-a047-a283b4acbc5e
	I0224 00:57:12.458838  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.458847  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.458858  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.458871  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.458886  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.459012  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:12.953194  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:12.953214  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.953225  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.953233  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.955523  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:12.955548  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.955559  156119 round_trippers.go:580]     Audit-Id: 8525e674-8e1e-42f5-bfc6-e8b3cab6a176
	I0224 00:57:12.955568  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.955576  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.955586  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.955598  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.955610  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.955735  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:12.956273  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:12.956326  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:12.956348  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:12.956367  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:12.958382  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:12.958404  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:12.958414  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:12.958424  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:12.958433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:12.958441  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:12 GMT
	I0224 00:57:12.958451  156119 round_trippers.go:580]     Audit-Id: 71a61bb0-c883-4497-a045-7361aedae0bc
	I0224 00:57:12.958487  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:12.958617  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.453118  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:13.453143  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.453156  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.453166  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.455439  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.455462  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.455473  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.455482  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.455490  156119 round_trippers.go:580]     Audit-Id: b2f058e9-1627-4e5d-b58c-2820bfe7d73d
	I0224 00:57:13.455498  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.455510  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.455518  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.455633  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:13.456067  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:13.456078  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.456085  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.456091  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.458192  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.458211  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.458220  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.458230  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.458238  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.458248  156119 round_trippers.go:580]     Audit-Id: 950e4308-6adb-4145-9538-f063164a5892
	I0224 00:57:13.458261  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.458273  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.458387  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.954032  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:13.954055  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.954090  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.954101  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.956464  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.956487  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.956498  156119 round_trippers.go:580]     Audit-Id: b556084c-91dc-464e-8969-7c6e774ad6f0
	I0224 00:57:13.956508  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.956516  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.956528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.956540  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.956551  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.956669  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:13.957226  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:13.957244  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:13.957256  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:13.957266  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:13.959299  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:13.959321  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:13.959330  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:13.959339  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:13.959347  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:13.959363  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:13 GMT
	I0224 00:57:13.959374  156119 round_trippers.go:580]     Audit-Id: e9386e4c-3f42-48d6-873e-78b66c96357d
	I0224 00:57:13.959386  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:13.959500  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:13.959800  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:14.453140  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:14.453164  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.453181  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.453192  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.455466  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.455486  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.455495  156119 round_trippers.go:580]     Audit-Id: bcc42f5c-a6fd-4e1b-8bda-4cb758dc51cf
	I0224 00:57:14.455505  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.455513  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.455520  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.455528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.455547  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.455702  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:14.456266  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:14.456283  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.456293  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.456302  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.459116  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.459141  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.459151  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.459161  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.459169  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.459183  156119 round_trippers.go:580]     Audit-Id: 3c917c2c-dca5-4b1e-b07a-082a1835d89c
	I0224 00:57:14.459196  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.459208  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.459363  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:14.953933  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:14.953957  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.953968  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.953976  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.956522  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:14.956545  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.956554  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.956563  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.956572  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.956584  156119 round_trippers.go:580]     Audit-Id: 6baa1850-317a-4c4f-8076-6e678e2fefd8
	I0224 00:57:14.956598  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.956607  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.956726  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:14.957311  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:14.957334  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:14.957345  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:14.957355  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:14.959332  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:14.959353  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:14.959363  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:14.959371  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:14.959380  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:14 GMT
	I0224 00:57:14.959390  156119 round_trippers.go:580]     Audit-Id: 94755ac3-f783-4793-b1e5-a9344ac31ec6
	I0224 00:57:14.959429  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:14.959442  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:14.959551  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:15.453634  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:15.453661  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.453674  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.453685  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.456276  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:15.456301  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.456313  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.456324  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.456332  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.456353  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.456362  156119 round_trippers.go:580]     Audit-Id: af65e0ce-7f6c-489b-bf4e-7a40233a96d3
	I0224 00:57:15.456375  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.456507  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:15.457091  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:15.457111  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.457123  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.457133  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.459144  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:15.459163  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.459173  156119 round_trippers.go:580]     Audit-Id: 4ef12c62-20b9-42de-a90e-132b016f3e8b
	I0224 00:57:15.459182  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.459191  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.459200  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.459207  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.459216  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.459333  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:15.953577  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:15.953601  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.953610  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.953620  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.956347  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:15.956372  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.956381  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.956390  156119 round_trippers.go:580]     Audit-Id: 66e9356a-bf22-4e84-922f-a00973814444
	I0224 00:57:15.956399  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.956408  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.956430  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.956446  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.956595  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:15.957151  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:15.957171  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:15.957183  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:15.957194  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:15.958961  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:15.958981  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:15.958991  156119 round_trippers.go:580]     Audit-Id: 1d2319af-f646-4434-8306-063edd2d4ffc
	I0224 00:57:15.959001  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:15.959015  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:15.959024  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:15.959032  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:15.959044  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:15 GMT
	I0224 00:57:15.959157  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:16.453440  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:16.453459  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.453467  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.453474  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.455785  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:16.455806  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.455815  156119 round_trippers.go:580]     Audit-Id: 0e887c87-349f-4bde-8776-930bfa586a03
	I0224 00:57:16.455824  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.455833  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.455842  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.455854  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.455875  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.455981  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:16.456513  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:16.456526  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.456536  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.456545  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.459586  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:16.459605  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.459615  156119 round_trippers.go:580]     Audit-Id: efc0c02d-4ebf-44bd-88f3-59702d6edfc0
	I0224 00:57:16.459623  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.459632  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.459645  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.459655  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.459670  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.459778  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:16.460062  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:16.953590  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:16.953613  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.953622  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.953628  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.955899  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:16.955967  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.955987  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.956005  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.956038  156119 round_trippers.go:580]     Audit-Id: 7770f169-8a87-43b7-af94-527141d2ce91
	I0224 00:57:16.956059  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.956076  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.956092  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.956620  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:16.957170  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:16.957218  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:16.957235  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:16.957245  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:16.959246  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:16.959267  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:16.959278  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:16.959287  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:16.959298  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:16 GMT
	I0224 00:57:16.959311  156119 round_trippers.go:580]     Audit-Id: 88ab6be6-09f7-4b3d-90b7-a5f4979ec682
	I0224 00:57:16.959321  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:16.959332  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:16.959520  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:17.454124  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:17.454154  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.454167  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.454178  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.456390  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:17.456414  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.456424  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.456433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.456442  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.456454  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.456463  156119 round_trippers.go:580]     Audit-Id: ccd62a61-7dc2-4af2-8b28-9a129fdec264
	I0224 00:57:17.456473  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.456582  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:17.457135  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:17.457147  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.457158  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.457168  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.459106  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:17.459127  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.459136  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.459144  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.459152  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.459162  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.459173  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.459181  156119 round_trippers.go:580]     Audit-Id: d737f9d3-6d3f-43c2-9105-1bc36798607b
	I0224 00:57:17.459291  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:17.953984  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:17.954007  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.954018  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.954024  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.956274  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:17.956295  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.956304  156119 round_trippers.go:580]     Audit-Id: 2def9cfe-c66f-4b3b-9fdd-072622ced7ef
	I0224 00:57:17.956313  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.956321  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.956337  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.956346  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.956356  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.956523  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:17.957067  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:17.957082  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:17.957092  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:17.957103  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:17.958803  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:17.958823  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:17.958832  156119 round_trippers.go:580]     Audit-Id: 0b77223c-ff2e-4c56-8d53-98233cf04262
	I0224 00:57:17.958840  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:17.958849  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:17.958860  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:17.958871  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:17.958883  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:17 GMT
	I0224 00:57:17.959016  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.453684  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:18.453708  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.453720  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.453731  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.456025  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:18.456051  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.456061  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.456069  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.456078  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.456087  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.456102  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.456109  156119 round_trippers.go:580]     Audit-Id: 80dea813-20b4-4081-9e9a-0fa8968fc217
	I0224 00:57:18.456225  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:18.456653  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:18.456667  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.456676  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.456685  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.458643  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:18.458664  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.458674  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.458682  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.458691  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.458703  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.458715  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.458726  156119 round_trippers.go:580]     Audit-Id: de267d16-690a-4304-8a9b-08d52cdc8a43
	I0224 00:57:18.458839  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.953454  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:18.953477  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.953489  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.953504  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.955900  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:18.955920  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.955929  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.955938  156119 round_trippers.go:580]     Audit-Id: a6f30687-aec1-4019-9257-87f017c9d840
	I0224 00:57:18.955948  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.955962  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.955972  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.955981  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.956101  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-9ws7r","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d","resourceVersion":"402","creationTimestamp":"2023-02-24T00:57:03Z","deletionTimestamp":"2023-02-24T00:57:34Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0224 00:57:18.956581  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:18.956593  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:18.956600  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:18.956606  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:18.958358  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:18.958377  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:18.958388  156119 round_trippers.go:580]     Audit-Id: 31b6c82c-57f5-4409-94fd-e14781b76fca
	I0224 00:57:18.958398  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:18.958408  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:18.958420  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:18.958433  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:18.958442  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:18 GMT
	I0224 00:57:18.958556  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:18.958980  156119 pod_ready.go:102] pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace has status "Ready":"False"
	I0224 00:57:19.453127  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-9ws7r
	I0224 00:57:19.453153  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.453163  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.453170  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.454911  156119 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0224 00:57:19.454943  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.454953  156119 round_trippers.go:580]     Audit-Id: 55fb85b8-43bb-4bdf-a24e-2c53cc59bd49
	I0224 00:57:19.454962  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.454973  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.454984  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.454996  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.455007  156119 round_trippers.go:580]     Content-Length: 216
	I0224 00:57:19.455018  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.455048  156119 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-9ws7r\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-9ws7r","kind":"pods"},"code":404}
	I0224 00:57:19.455267  156119 pod_ready.go:97] error getting pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-9ws7r" not found
	I0224 00:57:19.455292  156119 pod_ready.go:81] duration metric: took 14.075778884s waiting for pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace to be "Ready" ...
	E0224 00:57:19.455307  156119 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-9ws7r" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-9ws7r" not found
	I0224 00:57:19.455322  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:19.455383  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:19.455394  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.455404  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.455415  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.457963  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:19.457983  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.457993  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.458007  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.458017  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.458030  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.458039  156119 round_trippers.go:580]     Audit-Id: 9953caaf-fe6a-42df-a8d7-43f5756e281d
	I0224 00:57:19.458050  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.458179  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"406","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 00:57:19.458668  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:19.458682  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.458689  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.458695  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.460106  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.460124  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.460134  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.460140  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.460149  156119 round_trippers.go:580]     Audit-Id: ec911c26-4334-44c8-869e-df2b63401210
	I0224 00:57:19.460159  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.460172  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.460184  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.460290  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:19.960915  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:19.960935  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.960943  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.960950  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.962952  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.962984  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.962992  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.962998  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.963003  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.963009  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.963014  156119 round_trippers.go:580]     Audit-Id: 50f08eb7-5ee1-411e-98d1-fc3376c2b760
	I0224 00:57:19.963019  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.963117  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"406","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 00:57:19.963565  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:19.963579  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:19.963586  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:19.963592  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:19.965187  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:19.965209  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:19.965222  156119 round_trippers.go:580]     Audit-Id: 8b1a7939-cbe2-4be6-921b-50808a4dd1f3
	I0224 00:57:19.965231  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:19.965239  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:19.965247  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:19.965260  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:19.965271  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:19 GMT
	I0224 00:57:19.965387  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.460898  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:20.460917  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.460925  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.460932  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.462894  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.462912  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.462919  156119 round_trippers.go:580]     Audit-Id: a64fb2fd-9a90-4112-8d82-60f641dc06a0
	I0224 00:57:20.462925  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.462930  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.462938  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.462946  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.462954  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.463039  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 00:57:20.463484  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.463499  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.463506  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.463513  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.465178  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.465193  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.465203  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.465212  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.465224  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.465235  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.465243  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.465249  156119 round_trippers.go:580]     Audit-Id: 750aded8-6b14-48fc-9d3d-559b51d9f4a8
	I0224 00:57:20.465344  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.465622  156119 pod_ready.go:92] pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.465643  156119 pod_ready.go:81] duration metric: took 1.010309087s waiting for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.465651  156119 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.465689  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-461512
	I0224 00:57:20.465696  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.465702  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.465711  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.467217  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.467233  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.467240  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.467246  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.467251  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.467257  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.467265  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.467276  156119 round_trippers.go:580]     Audit-Id: b70e0a44-ad01-45fc-b0c4-bdd1c866483f
	I0224 00:57:20.467391  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-461512","namespace":"kube-system","uid":"85634add-ee6f-426e-8dce-c5bd503ada85","resourceVersion":"279","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.mirror":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.seen":"2023-02-24T00:56:50.894583011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 00:57:20.467726  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.467737  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.467744  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.467750  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.469177  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.469196  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.469206  156119 round_trippers.go:580]     Audit-Id: 00991078-94e8-4d4b-9997-20dd395be4a8
	I0224 00:57:20.469215  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.469223  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.469234  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.469245  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.469258  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.469358  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.469618  156119 pod_ready.go:92] pod "etcd-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.469629  156119 pod_ready.go:81] duration metric: took 3.970892ms waiting for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.469641  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.469675  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-461512
	I0224 00:57:20.469682  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.469688  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.469694  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.471104  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.471127  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.471137  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.471146  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.471162  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.471171  156119 round_trippers.go:580]     Audit-Id: 00f81bf6-077f-4272-ad5d-34e595caecf2
	I0224 00:57:20.471183  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.471195  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.471303  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-461512","namespace":"kube-system","uid":"915d077c-7a17-4c95-9199-8146800a171b","resourceVersion":"382","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.mirror":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.seen":"2023-02-24T00:56:50.894613111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 00:57:20.471667  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.471679  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.471685  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.471692  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.472935  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.472951  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.472960  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.472968  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.472976  156119 round_trippers.go:580]     Audit-Id: a85d0139-2ebc-4a3d-87c2-c760977905be
	I0224 00:57:20.472988  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.473002  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.473015  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.473118  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.473376  156119 pod_ready.go:92] pod "kube-apiserver-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.473387  156119 pod_ready.go:81] duration metric: took 3.740685ms waiting for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.473395  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.473427  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-461512
	I0224 00:57:20.473434  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.473440  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.473451  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.474866  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.474884  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.474893  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.474902  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.474914  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.474923  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.474935  156119 round_trippers.go:580]     Audit-Id: e3ba6523-f36b-47a6-9780-847e53a3000e
	I0224 00:57:20.474947  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.475049  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-461512","namespace":"kube-system","uid":"8e426bcd-dab9-430d-b166-f7ab34013208","resourceVersion":"274","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.mirror":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.seen":"2023-02-24T00:56:50.894614692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 00:57:20.475355  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.475364  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.475371  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.475377  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.476548  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.476562  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.476569  156119 round_trippers.go:580]     Audit-Id: 6d5afb8d-1de1-4df0-a1c8-a0bccd3b815b
	I0224 00:57:20.476578  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.476593  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.476605  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.476618  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.476629  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.476691  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.476909  156119 pod_ready.go:92] pod "kube-controller-manager-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.476918  156119 pod_ready.go:81] duration metric: took 3.518277ms waiting for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.476924  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.476954  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:20.476961  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.476968  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.476974  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.478193  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.478211  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.478220  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.478229  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.478241  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.478252  156119 round_trippers.go:580]     Audit-Id: fee2a3c3-73ac-4117-a14c-0dc80a1c7e5b
	I0224 00:57:20.478263  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.478275  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.478360  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvmbp","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9e9bac2-7132-4b60-a535-80b6113e0e8d","resourceVersion":"392","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 00:57:20.478690  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.478702  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.478709  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.478715  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.479868  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.479883  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.479889  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.479895  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.479901  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.479910  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.479922  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.479931  156119 round_trippers.go:580]     Audit-Id: 4601d3e4-cc6d-4956-bf2b-277f6786a542
	I0224 00:57:20.480034  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.480276  156119 pod_ready.go:92] pod "kube-proxy-dvmbp" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.480289  156119 pod_ready.go:81] duration metric: took 3.359473ms waiting for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.480299  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.661666  156119 request.go:622] Waited for 181.310592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:20.661707  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:20.661711  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.661719  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.661728  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.663380  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.663399  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.663408  156119 round_trippers.go:580]     Audit-Id: c0e2ce02-f018-4a9a-bfa0-44745e4544fb
	I0224 00:57:20.663417  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.663427  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.663449  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.663462  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.663471  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.663553  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-461512","namespace":"kube-system","uid":"64f3ef30-ed87-42cc-b0e2-cd3c7c922383","resourceVersion":"280","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.mirror":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.seen":"2023-02-24T00:56:50.894615981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 00:57:20.861228  156119 request.go:622] Waited for 197.349951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.861288  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:20.861295  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.861304  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.861311  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.863076  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:20.863096  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.863105  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.863113  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.863121  156119 round_trippers.go:580]     Audit-Id: f64a2596-fe22-458d-afaf-5f8873e56ad1
	I0224 00:57:20.863130  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.863154  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.863166  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.863248  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0224 00:57:20.863541  156119 pod_ready.go:92] pod "kube-scheduler-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:20.863566  156119 pod_ready.go:81] duration metric: took 383.260366ms waiting for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:20.863580  156119 pod_ready.go:38] duration metric: took 15.491172291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:20.863608  156119 api_server.go:51] waiting for apiserver process to appear ...
	I0224 00:57:20.863654  156119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 00:57:20.872462  156119 command_runner.go:130] > 2043
	I0224 00:57:20.873050  156119 api_server.go:71] duration metric: took 16.322084904s to wait for apiserver process to appear ...
	I0224 00:57:20.873068  156119 api_server.go:87] waiting for apiserver healthz status ...
	I0224 00:57:20.873079  156119 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0224 00:57:20.876781  156119 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0224 00:57:20.876822  156119 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0224 00:57:20.876830  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:20.876838  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:20.876844  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:20.877498  156119 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0224 00:57:20.877513  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:20.877520  156119 round_trippers.go:580]     Content-Length: 263
	I0224 00:57:20.877525  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:20 GMT
	I0224 00:57:20.877531  156119 round_trippers.go:580]     Audit-Id: 755f3a52-a8c5-4941-9a59-7e14cde38318
	I0224 00:57:20.877538  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:20.877550  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:20.877562  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:20.877574  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:20.877592  156119 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0224 00:57:20.877657  156119 api_server.go:140] control plane version: v1.26.1
	I0224 00:57:20.877671  156119 api_server.go:130] duration metric: took 4.597635ms to wait for apiserver health ...
	I0224 00:57:20.877679  156119 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 00:57:21.060995  156119 request.go:622] Waited for 183.255895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.061048  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.061053  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.061065  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.061072  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.064181  156119 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 00:57:21.064204  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.064211  156119 round_trippers.go:580]     Audit-Id: 0b5c1d7a-9759-461b-8451-9d12c1a71646
	I0224 00:57:21.064217  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.064222  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.064236  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.064244  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.064250  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.064666  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 00:57:21.066423  156119 system_pods.go:59] 8 kube-system pods found
	I0224 00:57:21.066444  156119 system_pods.go:61] "coredns-787d4945fb-r6m7z" [8c8eb92c-c99a-4eea-8518-bd2bac5df023] Running
	I0224 00:57:21.066451  156119 system_pods.go:61] "etcd-multinode-461512" [85634add-ee6f-426e-8dce-c5bd503ada85] Running
	I0224 00:57:21.066462  156119 system_pods.go:61] "kindnet-5p4bl" [5b593525-bd00-43d2-8402-71e8fd30a4ef] Running
	I0224 00:57:21.066470  156119 system_pods.go:61] "kube-apiserver-multinode-461512" [915d077c-7a17-4c95-9199-8146800a171b] Running
	I0224 00:57:21.066481  156119 system_pods.go:61] "kube-controller-manager-multinode-461512" [8e426bcd-dab9-430d-b166-f7ab34013208] Running
	I0224 00:57:21.066488  156119 system_pods.go:61] "kube-proxy-dvmbp" [e9e9bac2-7132-4b60-a535-80b6113e0e8d] Running
	I0224 00:57:21.066493  156119 system_pods.go:61] "kube-scheduler-multinode-461512" [64f3ef30-ed87-42cc-b0e2-cd3c7c922383] Running
	I0224 00:57:21.066499  156119 system_pods.go:61] "storage-provisioner" [82115459-afa2-425c-a8bc-9da99885c6ae] Running
	I0224 00:57:21.066503  156119 system_pods.go:74] duration metric: took 188.820667ms to wait for pod list to return data ...
	I0224 00:57:21.066512  156119 default_sa.go:34] waiting for default service account to be created ...
	I0224 00:57:21.261977  156119 request.go:622] Waited for 195.394959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0224 00:57:21.262057  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0224 00:57:21.262089  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.262102  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.262113  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.264245  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:21.264272  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.264282  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.264288  156119 round_trippers.go:580]     Content-Length: 261
	I0224 00:57:21.264294  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.264303  156119 round_trippers.go:580]     Audit-Id: c62a4440-1953-4578-b1ba-eb610c0bab2a
	I0224 00:57:21.264309  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.264317  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.264340  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.264371  156119 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f47f03da-df5a-4d85-b75b-77af9a8736c4","resourceVersion":"339","creationTimestamp":"2023-02-24T00:57:03Z"}}]}
	I0224 00:57:21.264571  156119 default_sa.go:45] found service account: "default"
	I0224 00:57:21.264588  156119 default_sa.go:55] duration metric: took 198.067894ms for default service account to be created ...
	I0224 00:57:21.264598  156119 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 00:57:21.460925  156119 request.go:622] Waited for 196.262808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.460986  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:21.460998  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.461006  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.461013  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.465441  156119 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 00:57:21.465462  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.465469  156119 round_trippers.go:580]     Audit-Id: b2c74b27-22a7-4901-8b8b-7d9af07e9f84
	I0224 00:57:21.465475  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.465482  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.465491  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.465503  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.465511  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.465930  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 00:57:21.467576  156119 system_pods.go:86] 8 kube-system pods found
	I0224 00:57:21.467594  156119 system_pods.go:89] "coredns-787d4945fb-r6m7z" [8c8eb92c-c99a-4eea-8518-bd2bac5df023] Running
	I0224 00:57:21.467599  156119 system_pods.go:89] "etcd-multinode-461512" [85634add-ee6f-426e-8dce-c5bd503ada85] Running
	I0224 00:57:21.467603  156119 system_pods.go:89] "kindnet-5p4bl" [5b593525-bd00-43d2-8402-71e8fd30a4ef] Running
	I0224 00:57:21.467607  156119 system_pods.go:89] "kube-apiserver-multinode-461512" [915d077c-7a17-4c95-9199-8146800a171b] Running
	I0224 00:57:21.467613  156119 system_pods.go:89] "kube-controller-manager-multinode-461512" [8e426bcd-dab9-430d-b166-f7ab34013208] Running
	I0224 00:57:21.467619  156119 system_pods.go:89] "kube-proxy-dvmbp" [e9e9bac2-7132-4b60-a535-80b6113e0e8d] Running
	I0224 00:57:21.467630  156119 system_pods.go:89] "kube-scheduler-multinode-461512" [64f3ef30-ed87-42cc-b0e2-cd3c7c922383] Running
	I0224 00:57:21.467636  156119 system_pods.go:89] "storage-provisioner" [82115459-afa2-425c-a8bc-9da99885c6ae] Running
	I0224 00:57:21.467642  156119 system_pods.go:126] duration metric: took 203.038059ms to wait for k8s-apps to be running ...
	I0224 00:57:21.467650  156119 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 00:57:21.467688  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:21.477136  156119 system_svc.go:56] duration metric: took 9.480308ms WaitForService to wait for kubelet.
	I0224 00:57:21.477158  156119 kubeadm.go:578] duration metric: took 16.926191742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 00:57:21.477179  156119 node_conditions.go:102] verifying NodePressure condition ...
	I0224 00:57:21.661570  156119 request.go:622] Waited for 184.324772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:21.661617  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:21.661622  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:21.661629  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:21.661637  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:21.663630  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:21.663648  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:21.663655  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:21.663661  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:21.663667  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:21 GMT
	I0224 00:57:21.663672  156119 round_trippers.go:580]     Audit-Id: ff45d116-a5ff-4ef3-83b9-4f576977529e
	I0224 00:57:21.663678  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:21.663684  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:21.663765  156119 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"415","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5052 chars]
	I0224 00:57:21.664572  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:21.664599  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:21.664611  156119 node_conditions.go:105] duration metric: took 187.427166ms to run NodePressure ...
	I0224 00:57:21.664624  156119 start.go:228] waiting for startup goroutines ...
	I0224 00:57:21.664634  156119 start.go:233] waiting for cluster config update ...
	I0224 00:57:21.664651  156119 start.go:242] writing updated cluster config ...
	I0224 00:57:21.667301  156119 out.go:177] 
	I0224 00:57:21.668878  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:21.668957  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:21.670818  156119 out.go:177] * Starting worker node multinode-461512-m02 in cluster multinode-461512
	I0224 00:57:21.672112  156119 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:57:21.673508  156119 out.go:177] * Pulling base image ...
	I0224 00:57:21.675167  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:57:21.675188  156119 cache.go:57] Caching tarball of preloaded images
	I0224 00:57:21.675191  156119 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:57:21.675271  156119 preload.go:174] Found /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 00:57:21.675287  156119 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 00:57:21.675391  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:21.740022  156119 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 00:57:21.740046  156119 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 00:57:21.740068  156119 cache.go:193] Successfully downloaded all kic artifacts
	I0224 00:57:21.740101  156119 start.go:364] acquiring machines lock for multinode-461512-m02: {Name:mk0c24cecb0f2bb7442eab1def0480438fceaed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:57:21.740198  156119 start.go:368] acquired machines lock for "multinode-461512-m02" in 79.668µs
	I0224 00:57:21.740221  156119 start.go:93] Provisioning new machine with config: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:21.740296  156119 start.go:125] createHost starting for "m02" (driver="docker")
	I0224 00:57:21.742330  156119 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 00:57:21.742446  156119 start.go:159] libmachine.API.Create for "multinode-461512" (driver="docker")
	I0224 00:57:21.742474  156119 client.go:168] LocalClient.Create starting
	I0224 00:57:21.742557  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem
	I0224 00:57:21.742595  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:57:21.742616  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:57:21.742669  156119 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem
	I0224 00:57:21.742690  156119 main.go:141] libmachine: Decoding PEM data...
	I0224 00:57:21.742699  156119 main.go:141] libmachine: Parsing certificate...
	I0224 00:57:21.742882  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:57:21.803637  156119 network_create.go:76] Found existing network {name:multinode-461512 subnet:0xc00137e270 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0224 00:57:21.803671  156119 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-461512-m02" container
	I0224 00:57:21.803721  156119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 00:57:21.864339  156119 cli_runner.go:164] Run: docker volume create multinode-461512-m02 --label name.minikube.sigs.k8s.io=multinode-461512-m02 --label created_by.minikube.sigs.k8s.io=true
	I0224 00:57:21.925959  156119 oci.go:103] Successfully created a docker volume multinode-461512-m02
	I0224 00:57:21.926036  156119 cli_runner.go:164] Run: docker run --rm --name multinode-461512-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512-m02 --entrypoint /usr/bin/test -v multinode-461512-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 00:57:22.523715  156119 oci.go:107] Successfully prepared a docker volume multinode-461512-m02
	I0224 00:57:22.523755  156119 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:57:22.523774  156119 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 00:57:22.523826  156119 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 00:57:27.366312  156119 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-461512-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.842437991s)
	I0224 00:57:27.366338  156119 kic.go:199] duration metric: took 4.842561 seconds to extract preloaded images to volume
	W0224 00:57:27.366475  156119 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 00:57:27.366588  156119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 00:57:27.484704  156119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-461512-m02 --name multinode-461512-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-461512-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-461512-m02 --network multinode-461512 --ip 192.168.58.3 --volume multinode-461512-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 00:57:27.913966  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Running}}
	I0224 00:57:27.980030  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.048591  156119 cli_runner.go:164] Run: docker exec multinode-461512-m02 stat /var/lib/dpkg/alternatives/iptables
	I0224 00:57:28.168673  156119 oci.go:144] the created container "multinode-461512-m02" has a running status.
	I0224 00:57:28.168709  156119 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa...
	I0224 00:57:28.247375  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 00:57:28.247417  156119 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 00:57:28.371424  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.442342  156119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 00:57:28.442367  156119 kic_runner.go:114] Args: [docker exec --privileged multinode-461512-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 00:57:28.556209  156119 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:57:28.620388  156119 machine.go:88] provisioning docker machine ...
	I0224 00:57:28.620421  156119 ubuntu.go:169] provisioning hostname "multinode-461512-m02"
	I0224 00:57:28.620479  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:28.683699  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:28.684119  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:28.684133  156119 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-461512-m02 && echo "multinode-461512-m02" | sudo tee /etc/hostname
	I0224 00:57:28.821854  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-461512-m02
	
	I0224 00:57:28.821928  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:28.883891  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:28.884325  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:28.884343  156119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-461512-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-461512-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-461512-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 00:57:29.017350  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 00:57:29.017377  156119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3785/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3785/.minikube}
	I0224 00:57:29.017390  156119 ubuntu.go:177] setting up certificates
	I0224 00:57:29.017397  156119 provision.go:83] configureAuth start
	I0224 00:57:29.017443  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:29.082607  156119 provision.go:138] copyHostCerts
	I0224 00:57:29.082649  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:57:29.082675  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem, removing ...
	I0224 00:57:29.082684  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem
	I0224 00:57:29.082743  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/ca.pem (1078 bytes)
	I0224 00:57:29.082807  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:57:29.082826  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem, removing ...
	I0224 00:57:29.082833  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem
	I0224 00:57:29.082855  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/cert.pem (1123 bytes)
	I0224 00:57:29.082895  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:57:29.082911  156119 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem, removing ...
	I0224 00:57:29.082917  156119 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem
	I0224 00:57:29.082935  156119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3785/.minikube/key.pem (1675 bytes)
	I0224 00:57:29.082977  156119 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem org=jenkins.multinode-461512-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-461512-m02]
	I0224 00:57:29.384338  156119 provision.go:172] copyRemoteCerts
	I0224 00:57:29.384393  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 00:57:29.384423  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.447725  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:29.541343  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 00:57:29.541398  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 00:57:29.558035  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 00:57:29.558112  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 00:57:29.574265  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 00:57:29.574309  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 00:57:29.589986  156119 provision.go:86] duration metric: configureAuth took 572.577934ms
	I0224 00:57:29.590008  156119 ubuntu.go:193] setting minikube options for container-runtime
	I0224 00:57:29.590178  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:29.590223  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.652331  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:29.652777  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:29.652791  156119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 00:57:29.781589  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 00:57:29.781613  156119 ubuntu.go:71] root file system type: overlay
	I0224 00:57:29.781744  156119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 00:57:29.781807  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:29.844419  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:29.844870  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:29.844933  156119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 00:57:29.986246  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 00:57:29.986309  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.049834  156119 main.go:141] libmachine: Using SSH client type: native
	I0224 00:57:30.050268  156119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0224 00:57:30.050289  156119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 00:57:30.687544  156119 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 00:57:29.979032345 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 00:57:30.687580  156119 machine.go:91] provisioned docker machine in 2.067171191s
	I0224 00:57:30.687592  156119 client.go:171] LocalClient.Create took 8.945109003s
	I0224 00:57:30.687611  156119 start.go:167] duration metric: libmachine.API.Create for "multinode-461512" took 8.945165168s
	I0224 00:57:30.687620  156119 start.go:300] post-start starting for "multinode-461512-m02" (driver="docker")
	I0224 00:57:30.687629  156119 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 00:57:30.687699  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 00:57:30.687750  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.752342  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:30.844917  156119 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 00:57:30.847316  156119 command_runner.go:130] > NAME="Ubuntu"
	I0224 00:57:30.847332  156119 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 00:57:30.847336  156119 command_runner.go:130] > ID=ubuntu
	I0224 00:57:30.847341  156119 command_runner.go:130] > ID_LIKE=debian
	I0224 00:57:30.847347  156119 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 00:57:30.847351  156119 command_runner.go:130] > VERSION_ID="20.04"
	I0224 00:57:30.847356  156119 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 00:57:30.847361  156119 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 00:57:30.847366  156119 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 00:57:30.847377  156119 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 00:57:30.847382  156119 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 00:57:30.847385  156119 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 00:57:30.847447  156119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 00:57:30.847461  156119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 00:57:30.847469  156119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 00:57:30.847479  156119 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 00:57:30.847489  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/addons for local assets ...
	I0224 00:57:30.847529  156119 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3785/.minikube/files for local assets ...
	I0224 00:57:30.847587  156119 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> 104702.pem in /etc/ssl/certs
	I0224 00:57:30.847596  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /etc/ssl/certs/104702.pem
	I0224 00:57:30.847670  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 00:57:30.853775  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:57:30.869542  156119 start.go:303] post-start completed in 181.911089ms
	I0224 00:57:30.869862  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:30.931451  156119 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/config.json ...
	I0224 00:57:30.931708  156119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:57:30.931749  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:30.993353  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.081629  156119 command_runner.go:130] > 16%
	I0224 00:57:31.081908  156119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 00:57:31.085246  156119 command_runner.go:130] > 245G
	I0224 00:57:31.085430  156119 start.go:128] duration metric: createHost completed in 9.345126362s
	I0224 00:57:31.085446  156119 start.go:83] releasing machines lock for "multinode-461512-m02", held for 9.345235208s
	I0224 00:57:31.085505  156119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:57:31.150235  156119 out.go:177] * Found network options:
	I0224 00:57:31.151641  156119 out.go:177]   - NO_PROXY=192.168.58.2
	W0224 00:57:31.152933  156119 proxy.go:119] fail to check proxy env: Error ip not in block
	W0224 00:57:31.152972  156119 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 00:57:31.153040  156119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 00:57:31.153083  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:31.153101  156119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 00:57:31.153157  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:57:31.222012  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.223120  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:57:31.346553  156119 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 00:57:31.347703  156119 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 00:57:31.347719  156119 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 00:57:31.347725  156119 command_runner.go:130] > Device: c5h/197d	Inode: 1319702     Links: 1
	I0224 00:57:31.347745  156119 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:57:31.347757  156119 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:57:31.347768  156119 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 00:57:31.347776  156119 command_runner.go:130] > Change: 2023-02-24 00:41:21.061607898 +0000
	I0224 00:57:31.347782  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:31.347834  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 00:57:31.366229  156119 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 00:57:31.366285  156119 ssh_runner.go:195] Run: which cri-dockerd
	I0224 00:57:31.368696  156119 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 00:57:31.368888  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 00:57:31.374837  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 00:57:31.386217  156119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 00:57:31.399774  156119 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 00:57:31.399831  156119 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 00:57:31.399848  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:57:31.399871  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:57:31.399959  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:57:31.410925  156119 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:57:31.410946  156119 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 00:57:31.411946  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 00:57:31.419796  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 00:57:31.426753  156119 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 00:57:31.426798  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 00:57:31.433598  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:57:31.440432  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 00:57:31.447234  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 00:57:31.453987  156119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 00:57:31.460171  156119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 00:57:31.466867  156119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 00:57:31.471985  156119 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 00:57:31.472480  156119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 00:57:31.478215  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:31.547538  156119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 00:57:31.618255  156119 start.go:485] detecting cgroup driver to use...
	I0224 00:57:31.618305  156119 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 00:57:31.618357  156119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 00:57:31.628809  156119 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 00:57:31.628832  156119 command_runner.go:130] > [Unit]
	I0224 00:57:31.628844  156119 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 00:57:31.628854  156119 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 00:57:31.628861  156119 command_runner.go:130] > BindsTo=containerd.service
	I0224 00:57:31.628871  156119 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 00:57:31.628882  156119 command_runner.go:130] > Wants=network-online.target
	I0224 00:57:31.628892  156119 command_runner.go:130] > Requires=docker.socket
	I0224 00:57:31.628902  156119 command_runner.go:130] > StartLimitBurst=3
	I0224 00:57:31.628912  156119 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 00:57:31.628921  156119 command_runner.go:130] > [Service]
	I0224 00:57:31.628931  156119 command_runner.go:130] > Type=notify
	I0224 00:57:31.628941  156119 command_runner.go:130] > Restart=on-failure
	I0224 00:57:31.628954  156119 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0224 00:57:31.628969  156119 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 00:57:31.628983  156119 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 00:57:31.629003  156119 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 00:57:31.629018  156119 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 00:57:31.629032  156119 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 00:57:31.629045  156119 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 00:57:31.629061  156119 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 00:57:31.629080  156119 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 00:57:31.629094  156119 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 00:57:31.629104  156119 command_runner.go:130] > ExecStart=
	I0224 00:57:31.629130  156119 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 00:57:31.629148  156119 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 00:57:31.629158  156119 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 00:57:31.629172  156119 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 00:57:31.629182  156119 command_runner.go:130] > LimitNOFILE=infinity
	I0224 00:57:31.629192  156119 command_runner.go:130] > LimitNPROC=infinity
	I0224 00:57:31.629199  156119 command_runner.go:130] > LimitCORE=infinity
	I0224 00:57:31.629213  156119 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 00:57:31.629225  156119 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 00:57:31.629235  156119 command_runner.go:130] > TasksMax=infinity
	I0224 00:57:31.629244  156119 command_runner.go:130] > TimeoutStartSec=0
	I0224 00:57:31.629257  156119 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 00:57:31.629264  156119 command_runner.go:130] > Delegate=yes
	I0224 00:57:31.629287  156119 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 00:57:31.629298  156119 command_runner.go:130] > KillMode=process
	I0224 00:57:31.629308  156119 command_runner.go:130] > [Install]
	I0224 00:57:31.629319  156119 command_runner.go:130] > WantedBy=multi-user.target
	I0224 00:57:31.629344  156119 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 00:57:31.629395  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 00:57:31.638296  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 00:57:31.650466  156119 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:57:31.650494  156119 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 00:57:31.651341  156119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 00:57:31.758941  156119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 00:57:31.845154  156119 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 00:57:31.845186  156119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 00:57:31.860738  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:31.940338  156119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 00:57:32.136014  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:57:32.209062  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 00:57:32.209129  156119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 00:57:32.286125  156119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 00:57:32.361521  156119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 00:57:32.441481  156119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 00:57:32.452095  156119 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 00:57:32.452144  156119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 00:57:32.454825  156119 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 00:57:32.454846  156119 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 00:57:32.454856  156119 command_runner.go:130] > Device: ceh/206d	Inode: 206         Links: 1
	I0224 00:57:32.454866  156119 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 00:57:32.454877  156119 command_runner.go:130] > Access: 2023-02-24 00:57:32.443280111 +0000
	I0224 00:57:32.454890  156119 command_runner.go:130] > Modify: 2023-02-24 00:57:32.443280111 +0000
	I0224 00:57:32.454906  156119 command_runner.go:130] > Change: 2023-02-24 00:57:32.447280514 +0000
	I0224 00:57:32.454913  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:32.454927  156119 start.go:553] Will wait 60s for crictl version
	I0224 00:57:32.454969  156119 ssh_runner.go:195] Run: which crictl
	I0224 00:57:32.457330  156119 command_runner.go:130] > /usr/bin/crictl
	I0224 00:57:32.457479  156119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 00:57:32.532332  156119 command_runner.go:130] > Version:  0.1.0
	I0224 00:57:32.532353  156119 command_runner.go:130] > RuntimeName:  docker
	I0224 00:57:32.532358  156119 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 00:57:32.532363  156119 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 00:57:32.532381  156119 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 00:57:32.532420  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:57:32.551686  156119 command_runner.go:130] > 23.0.1
	I0224 00:57:32.552625  156119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 00:57:32.571374  156119 command_runner.go:130] > 23.0.1
	I0224 00:57:32.574917  156119 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 00:57:32.576410  156119 out.go:177]   - env NO_PROXY=192.168.58.2
	I0224 00:57:32.577911  156119 cli_runner.go:164] Run: docker network inspect multinode-461512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 00:57:32.641068  156119 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0224 00:57:32.644187  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 00:57:32.653326  156119 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512 for IP: 192.168.58.3
	I0224 00:57:32.653360  156119 certs.go:186] acquiring lock for shared ca certs: {Name:mk4ccb66e3fb9104eb70d9107cb5563409a81019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 00:57:32.653502  156119 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key
	I0224 00:57:32.653551  156119 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key
	I0224 00:57:32.653573  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 00:57:32.653592  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 00:57:32.653605  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 00:57:32.653621  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 00:57:32.653689  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem (1338 bytes)
	W0224 00:57:32.653729  156119 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470_empty.pem, impossibly tiny 0 bytes
	I0224 00:57:32.653744  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 00:57:32.653780  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/ca.pem (1078 bytes)
	I0224 00:57:32.653810  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/cert.pem (1123 bytes)
	I0224 00:57:32.653841  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/home/jenkins/minikube-integration/15909-3785/.minikube/certs/key.pem (1675 bytes)
	I0224 00:57:32.653900  156119 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem (1708 bytes)
	I0224 00:57:32.653933  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem -> /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.653953  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem -> /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.653971  156119 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.654351  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 00:57:32.671022  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 00:57:32.687023  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 00:57:32.704568  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 00:57:32.720792  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/certs/10470.pem --> /usr/share/ca-certificates/10470.pem (1338 bytes)
	I0224 00:57:32.736669  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/ssl/certs/104702.pem --> /usr/share/ca-certificates/104702.pem (1708 bytes)
	I0224 00:57:32.751862  156119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 00:57:32.767766  156119 ssh_runner.go:195] Run: openssl version
	I0224 00:57:32.771845  156119 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 00:57:32.772107  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10470.pem && ln -fs /usr/share/ca-certificates/10470.pem /etc/ssl/certs/10470.pem"
	I0224 00:57:32.778681  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781540  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781594  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:45 /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.781627  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10470.pem
	I0224 00:57:32.786013  156119 command_runner.go:130] > 51391683
	I0224 00:57:32.786224  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10470.pem /etc/ssl/certs/51391683.0"
	I0224 00:57:32.792555  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104702.pem && ln -fs /usr/share/ca-certificates/104702.pem /etc/ssl/certs/104702.pem"
	I0224 00:57:32.799554  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802124  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802247  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:45 /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.802329  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104702.pem
	I0224 00:57:32.806261  156119 command_runner.go:130] > 3ec20f2e
	I0224 00:57:32.806397  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104702.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 00:57:32.812732  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 00:57:32.819108  156119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821882  156119 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821913  156119 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.821939  156119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 00:57:32.826159  156119 command_runner.go:130] > b5213941
	I0224 00:57:32.826196  156119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 00:57:32.832571  156119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 00:57:32.855037  156119 command_runner.go:130] > cgroupfs
	I0224 00:57:32.855091  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:57:32.855103  156119 cni.go:136] 2 nodes found, recommending kindnet
	I0224 00:57:32.855117  156119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 00:57:32.855141  156119 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-461512 NodeName:multinode-461512-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 00:57:32.855273  156119 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-461512-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 00:57:32.855344  156119 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-461512-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 00:57:32.855397  156119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 00:57:32.861684  156119 command_runner.go:130] > kubeadm
	I0224 00:57:32.861697  156119 command_runner.go:130] > kubectl
	I0224 00:57:32.861701  156119 command_runner.go:130] > kubelet
	I0224 00:57:32.862318  156119 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 00:57:32.862375  156119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0224 00:57:32.868604  156119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0224 00:57:32.880424  156119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 00:57:32.891986  156119 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 00:57:32.894555  156119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 00:57:32.902817  156119 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:57:32.903035  156119 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:57:32.903027  156119 start.go:301] JoinCluster: &{Name:multinode-461512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-461512 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:57:32.903094  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0224 00:57:32.903126  156119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:57:32.965906  156119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:57:33.108115  156119 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 
	I0224 00:57:33.108169  156119 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:33.108198  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-461512-m02"
	I0224 00:57:33.142214  156119 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 00:57:33.165745  156119 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0224 00:57:33.165770  156119 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0224 00:57:33.165783  156119 command_runner.go:130] > OS: Linux
	I0224 00:57:33.165791  156119 command_runner.go:130] > CGROUPS_CPU: enabled
	I0224 00:57:33.165804  156119 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0224 00:57:33.165811  156119 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0224 00:57:33.165816  156119 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0224 00:57:33.165826  156119 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0224 00:57:33.165834  156119 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0224 00:57:33.165840  156119 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0224 00:57:33.165845  156119 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0224 00:57:33.165850  156119 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0224 00:57:33.241499  156119 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0224 00:57:33.241531  156119 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0224 00:57:33.266172  156119 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 00:57:33.266256  156119 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 00:57:33.266271  156119 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 00:57:33.349062  156119 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0224 00:57:34.868204  156119 command_runner.go:130] > This node has joined the cluster:
	I0224 00:57:34.868233  156119 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0224 00:57:34.868243  156119 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0224 00:57:34.868254  156119 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0224 00:57:34.870639  156119 command_runner.go:130] ! W0224 00:57:33.141898    1336 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 00:57:34.870668  156119 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0224 00:57:34.870680  156119 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 00:57:34.870704  156119 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ioyax0.yxmw6naapbho79wq --discovery-token-ca-cert-hash sha256:bc80f60e14a6b9b559fc179e503c895fcccd0d05d03dee10e43de88c94ec0cb4 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-461512-m02": (1.762488468s)
	I0224 00:57:34.870727  156119 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0224 00:57:35.060590  156119 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0224 00:57:35.060621  156119 start.go:303] JoinCluster complete in 2.157594183s
	I0224 00:57:35.060633  156119 cni.go:84] Creating CNI manager for ""
	I0224 00:57:35.060637  156119 cni.go:136] 2 nodes found, recommending kindnet
	I0224 00:57:35.060676  156119 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 00:57:35.063963  156119 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 00:57:35.063985  156119 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 00:57:35.063992  156119 command_runner.go:130] > Device: 34h/52d	Inode: 1317791     Links: 1
	I0224 00:57:35.063998  156119 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 00:57:35.064003  156119 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:57:35.064009  156119 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 00:57:35.064016  156119 command_runner.go:130] > Change: 2023-02-24 00:41:20.329534418 +0000
	I0224 00:57:35.064020  156119 command_runner.go:130] >  Birth: -
	I0224 00:57:35.064058  156119 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 00:57:35.064070  156119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 00:57:35.075913  156119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 00:57:35.221111  156119 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0224 00:57:35.223962  156119 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0224 00:57:35.226322  156119 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0224 00:57:35.236323  156119 command_runner.go:130] > daemonset.apps/kindnet configured
	I0224 00:57:35.239986  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:35.240233  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:35.240549  156119 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 00:57:35.240561  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.240569  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.240579  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.242080  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.242101  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.242111  156119 round_trippers.go:580]     Audit-Id: 22e449c9-a66b-4718-9176-731a0bfb42db
	I0224 00:57:35.242127  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.242140  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.242162  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.242175  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.242182  156119 round_trippers.go:580]     Content-Length: 291
	I0224 00:57:35.242193  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.242224  156119 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ccf3ec87-77f6-42ea-8caa-6941529dafd4","resourceVersion":"437","creationTimestamp":"2023-02-24T00:56:50Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 00:57:35.242315  156119 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-461512" context rescaled to 1 replicas
	I0224 00:57:35.242352  156119 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 00:57:35.244595  156119 out.go:177] * Verifying Kubernetes components...
	I0224 00:57:35.245968  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:35.255278  156119 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:57:35.255519  156119 kapi.go:59] client config for multinode-461512: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/profiles/multinode-461512/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 00:57:35.255773  156119 node_ready.go:35] waiting up to 6m0s for node "multinode-461512-m02" to be "Ready" ...
	I0224 00:57:35.255832  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:35.255843  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.255854  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.255867  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.257401  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.257421  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.257438  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.257447  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.257464  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.257475  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.257483  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.257494  156119 round_trippers.go:580]     Audit-Id: 578e6a5a-cddd-4783-a035-101bc94b08b4
	I0224 00:57:35.257596  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:35.257911  156119 node_ready.go:49] node "multinode-461512-m02" has status "Ready":"True"
	I0224 00:57:35.257924  156119 node_ready.go:38] duration metric: took 2.135663ms waiting for node "multinode-461512-m02" to be "Ready" ...
	I0224 00:57:35.257933  156119 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:35.257995  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0224 00:57:35.258004  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.258011  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.258024  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.260522  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:35.260538  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.260545  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.260553  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.260562  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.260574  156119 round_trippers.go:580]     Audit-Id: b38f0cfd-9d7c-4c7c-a390-3a049475d308
	I0224 00:57:35.260584  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.260596  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.261095  156119 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0224 00:57:35.263496  156119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.263550  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-r6m7z
	I0224 00:57:35.263557  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.263565  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.263571  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.265078  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.265098  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.265109  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.265122  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.265134  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.265143  156119 round_trippers.go:580]     Audit-Id: 225db0a9-2390-4fe5-bb77-ddcd53227ee8
	I0224 00:57:35.265158  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.265171  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.265289  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-r6m7z","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"8c8eb92c-c99a-4eea-8518-bd2bac5df023","resourceVersion":"433","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"db0d6e29-571b-4f9b-82fd-09f0bc2ef201","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db0d6e29-571b-4f9b-82fd-09f0bc2ef201\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 00:57:35.265704  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.265717  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.265724  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.265730  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.267226  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.267245  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.267254  156119 round_trippers.go:580]     Audit-Id: 74dae4a2-590a-42e8-9456-1eac83edc16f
	I0224 00:57:35.267263  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.267272  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.267289  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.267302  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.267316  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.267411  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.267697  156119 pod_ready.go:92] pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.267710  156119 pod_ready.go:81] duration metric: took 4.19524ms waiting for pod "coredns-787d4945fb-r6m7z" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.267721  156119 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.267766  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-461512
	I0224 00:57:35.267775  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.267785  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.267796  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.269318  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.269334  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.269344  156119 round_trippers.go:580]     Audit-Id: b4bfb09d-e1e6-4207-95c4-e410b8e5d3e0
	I0224 00:57:35.269353  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.269367  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.269375  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.269384  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.269394  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.269461  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-461512","namespace":"kube-system","uid":"85634add-ee6f-426e-8dce-c5bd503ada85","resourceVersion":"279","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.mirror":"755375775ca4908a1a35224e40dd8da8","kubernetes.io/config.seen":"2023-02-24T00:56:50.894583011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 00:57:35.269769  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.269781  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.269788  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.269794  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.271183  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.271204  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.271214  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.271225  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.271236  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.271247  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.271260  156119 round_trippers.go:580]     Audit-Id: 9b061dc6-e39d-4590-ac10-1e0e51c3fa00
	I0224 00:57:35.271272  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.271366  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.271648  156119 pod_ready.go:92] pod "etcd-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.271658  156119 pod_ready.go:81] duration metric: took 3.930301ms waiting for pod "etcd-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.271669  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.271708  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-461512
	I0224 00:57:35.271715  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.271721  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.271728  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.273022  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.273043  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.273053  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.273060  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.273065  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.273071  156119 round_trippers.go:580]     Audit-Id: 5abc9a7f-5a67-4924-b8e9-104d6635d5c5
	I0224 00:57:35.273079  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.273088  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.273180  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-461512","namespace":"kube-system","uid":"915d077c-7a17-4c95-9199-8146800a171b","resourceVersion":"382","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.mirror":"4c6cb11c2c301f276f12bb7545f0af61","kubernetes.io/config.seen":"2023-02-24T00:56:50.894613111Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 00:57:35.273552  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.273562  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.273569  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.273575  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.274832  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.274845  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.274853  156119 round_trippers.go:580]     Audit-Id: 294499bf-a8d4-4fdc-b834-b71e69e7fb8a
	I0224 00:57:35.274862  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.274870  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.274883  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.274896  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.274908  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.274986  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.275253  156119 pod_ready.go:92] pod "kube-apiserver-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.275264  156119 pod_ready.go:81] duration metric: took 3.589866ms waiting for pod "kube-apiserver-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.275271  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.275306  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-461512
	I0224 00:57:35.275314  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.275320  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.275326  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.276595  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.276613  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.276621  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.276627  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.276633  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.276641  156119 round_trippers.go:580]     Audit-Id: 74edde08-68ae-43a3-b3cb-9a62ed698a3c
	I0224 00:57:35.276649  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.276657  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.276787  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-461512","namespace":"kube-system","uid":"8e426bcd-dab9-430d-b166-f7ab34013208","resourceVersion":"274","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.mirror":"c1d525744bc3189fa4b6ceed33e9b7b6","kubernetes.io/config.seen":"2023-02-24T00:56:50.894614692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 00:57:35.277138  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.277150  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.277157  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.277163  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.278432  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.278448  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.278455  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.278464  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.278474  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.278486  156119 round_trippers.go:580]     Audit-Id: 585cdc6c-4ec4-4fdf-8600-4449ed6e569c
	I0224 00:57:35.278509  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.278519  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.278633  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.278890  156119 pod_ready.go:92] pod "kube-controller-manager-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.278900  156119 pod_ready.go:81] duration metric: took 3.62409ms waiting for pod "kube-controller-manager-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.278907  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.456319  156119 request.go:622] Waited for 177.362601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:35.456376  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dvmbp
	I0224 00:57:35.456380  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.456388  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.456397  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.458241  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.458263  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.458274  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.458288  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.458297  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.458305  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.458319  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.458331  156119 round_trippers.go:580]     Audit-Id: 3c317fc6-efbe-434f-bc24-7aff9effd134
	I0224 00:57:35.458464  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dvmbp","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9e9bac2-7132-4b60-a535-80b6113e0e8d","resourceVersion":"392","creationTimestamp":"2023-02-24T00:57:03Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 00:57:35.656236  156119 request.go:622] Waited for 197.348674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.656300  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:35.656308  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.656320  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.656334  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.658206  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.658225  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.658235  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.658245  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.658254  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.658264  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.658274  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.658286  156119 round_trippers.go:580]     Audit-Id: 9f228e37-8b47-4cd9-b341-29b1cce2bf2f
	I0224 00:57:35.658367  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:35.658703  156119 pod_ready.go:92] pod "kube-proxy-dvmbp" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:35.658715  156119 pod_ready.go:81] duration metric: took 379.802212ms waiting for pod "kube-proxy-dvmbp" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.658724  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phwrs" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:35.856047  156119 request.go:622] Waited for 197.270982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:35.856114  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:35.856123  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:35.856131  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:35.856138  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:35.857867  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:35.857885  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:35.857891  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:35.857897  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:35.857903  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:35.857908  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:35 GMT
	I0224 00:57:35.857914  156119 round_trippers.go:580]     Audit-Id: de9b2af4-a722-42d6-b783-e741fb59335b
	I0224 00:57:35.857919  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:35.858011  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"469","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0224 00:57:36.056813  156119 request.go:622] Waited for 198.369913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.056882  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.056889  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.056897  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.056909  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.058838  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:36.058870  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.058881  156119 round_trippers.go:580]     Audit-Id: a375d4ba-c6ec-436c-a7ea-ad9ec0be8ac2
	I0224 00:57:36.058888  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.058894  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.058900  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.058906  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.058914  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.059017  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:36.560091  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:36.560165  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.560189  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.560206  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.562586  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:36.562654  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.562674  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.562692  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.562717  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.562745  156119 round_trippers.go:580]     Audit-Id: f6beb059-32be-4ed1-8b61-f36615f67007
	I0224 00:57:36.562763  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.562778  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.562922  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"483","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 00:57:36.563528  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:36.563558  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:36.563575  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:36.563607  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:36.565407  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:36.565464  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:36.565483  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:36.565501  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:36.565528  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:36.565549  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:36 GMT
	I0224 00:57:36.565574  156119 round_trippers.go:580]     Audit-Id: b826fba0-67ca-4966-afcd-feb3fe207fd1
	I0224 00:57:36.565594  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:36.565714  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:37.060449  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phwrs
	I0224 00:57:37.060472  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.060488  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.060499  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.062860  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.062899  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.062910  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.062924  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.062937  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.062950  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.062961  156119 round_trippers.go:580]     Audit-Id: 42c3a841-711f-479a-8667-78c05be6250e
	I0224 00:57:37.062974  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.063119  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phwrs","generateName":"kube-proxy-","namespace":"kube-system","uid":"0c1df716-d306-4932-ac62-f5d9ebd74cdb","resourceVersion":"491","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ac4eac56-21ca-4f1f-a0d6-df82bff382f4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac4eac56-21ca-4f1f-a0d6-df82bff382f4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 00:57:37.063599  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512-m02
	I0224 00:57:37.063613  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.063624  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.063633  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.065355  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:37.065376  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.065387  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.065395  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.065407  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.065416  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.065425  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.065436  156119 round_trippers.go:580]     Audit-Id: 85cf5451-24a8-45ed-a421-617c2740162c
	I0224 00:57:37.065566  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512-m02","uid":"232a61b1-45e8-4ecf-9a67-09c1f1394e3f","resourceVersion":"482","creationTimestamp":"2023-02-24T00:57:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:57:34Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4059 chars]
	I0224 00:57:37.065849  156119 pod_ready.go:92] pod "kube-proxy-phwrs" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:37.065870  156119 pod_ready.go:81] duration metric: took 1.40713953s waiting for pod "kube-proxy-phwrs" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.065885  156119 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.065943  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-461512
	I0224 00:57:37.065951  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.065960  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.065973  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.067725  156119 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 00:57:37.067753  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.067763  156119 round_trippers.go:580]     Audit-Id: 567cc883-333a-49c3-b68e-f253a42841d7
	I0224 00:57:37.067771  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.067782  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.067791  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.067800  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.067814  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.067906  156119 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-461512","namespace":"kube-system","uid":"64f3ef30-ed87-42cc-b0e2-cd3c7c922383","resourceVersion":"280","creationTimestamp":"2023-02-24T00:56:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.mirror":"6d86c9f2cb44969723080e3b260936ff","kubernetes.io/config.seen":"2023-02-24T00:56:50.894615981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T00:56:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 00:57:37.256644  156119 request.go:622] Waited for 188.369517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:37.256726  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-461512
	I0224 00:57:37.256741  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.256757  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.256771  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.259181  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.259205  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.259216  156119 round_trippers.go:580]     Audit-Id: b1afc5d8-eb41-4761-9e50-95912e19243c
	I0224 00:57:37.259224  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.259237  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.259247  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.259261  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.259274  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.259390  156119 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T00:56:48Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0224 00:57:37.259695  156119 pod_ready.go:92] pod "kube-scheduler-multinode-461512" in "kube-system" namespace has status "Ready":"True"
	I0224 00:57:37.259708  156119 pod_ready.go:81] duration metric: took 193.811569ms waiting for pod "kube-scheduler-multinode-461512" in "kube-system" namespace to be "Ready" ...
	I0224 00:57:37.259721  156119 pod_ready.go:38] duration metric: took 2.001773946s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 00:57:37.259745  156119 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 00:57:37.259793  156119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:57:37.271419  156119 system_svc.go:56] duration metric: took 11.666861ms WaitForService to wait for kubelet.
	I0224 00:57:37.271443  156119 kubeadm.go:578] duration metric: took 2.029060625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 00:57:37.271459  156119 node_conditions.go:102] verifying NodePressure condition ...
	I0224 00:57:37.456890  156119 request.go:622] Waited for 185.359286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:37.456974  156119 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0224 00:57:37.456986  156119 round_trippers.go:469] Request Headers:
	I0224 00:57:37.457003  156119 round_trippers.go:473]     Accept: application/json, */*
	I0224 00:57:37.457018  156119 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 00:57:37.459450  156119 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 00:57:37.459477  156119 round_trippers.go:577] Response Headers:
	I0224 00:57:37.459488  156119 round_trippers.go:580]     Date: Fri, 24 Feb 2023 00:57:37 GMT
	I0224 00:57:37.459499  156119 round_trippers.go:580]     Audit-Id: 926ea2fc-9682-4288-abe8-366a6e931c81
	I0224 00:57:37.459511  156119 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 00:57:37.459525  156119 round_trippers.go:580]     Content-Type: application/json
	I0224 00:57:37.459535  156119 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: aa4a68e6-f738-44d6-b460-48fdb3b6ac66
	I0224 00:57:37.459553  156119 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 08f59aab-2fc6-4ec5-82d4-363548af041b
	I0224 00:57:37.459762  156119 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-461512","uid":"bfc41c67-f273-4daf-8c1d-d87836ce009e","resourceVersion":"440","creationTimestamp":"2023-02-24T00:56:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-461512","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-461512","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T00_56_51_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0224 00:57:37.460364  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:37.460382  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:37.460391  156119 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 00:57:37.460395  156119 node_conditions.go:123] node cpu capacity is 8
	I0224 00:57:37.460401  156119 node_conditions.go:105] duration metric: took 188.938984ms to run NodePressure ...
	I0224 00:57:37.460412  156119 start.go:228] waiting for startup goroutines ...
	I0224 00:57:37.460446  156119 start.go:242] writing updated cluster config ...
	I0224 00:57:37.460894  156119 ssh_runner.go:195] Run: rm -f paused
	I0224 00:57:37.523560  156119 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 00:57:37.526010  156119 out.go:177] * Done! kubectl is now configured to use "multinode-461512" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 00:56:34 UTC, end at Fri 2023-02-24 00:57:45 UTC. --
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642123853Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642149728Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642159802Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642197277Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642232110Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642267960Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642301399Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642346191Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642374405Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642597931Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.642636495Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.643080766Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.653654103Z" level=info msg="Loading containers: start."
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.726308188Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.757602777Z" level=info msg="Loading containers: done."
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.766013325Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.766084796Z" level=info msg="Daemon has completed initialization"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.778254487Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 00:56:37 multinode-461512 systemd[1]: Started Docker Application Container Engine.
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.784890254Z" level=info msg="API listen on [::]:2376"
	Feb 24 00:56:37 multinode-461512 dockerd[942]: time="2023-02-24T00:56:37.788986365Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.849389166Z" level=info msg="ignoring event" container=18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.909337692Z" level=info msg="ignoring event" container=d20fb8d35594351811e98e88cc7bbbc92fe03e5e7dade38f76fada0dc3532673 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:18 multinode-461512 dockerd[942]: time="2023-02-24T00:57:18.975380266Z" level=info msg="ignoring event" container=e42bbd739d7352d417880430dda0aa46923d501cf050bf2f8cba81cd285a8c95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 00:57:19 multinode-461512 dockerd[942]: time="2023-02-24T00:57:19.060141729Z" level=info msg="ignoring event" container=6a7397548127f04391e2ea61c9147e8b7c0c83c2e388ce94493c4924a2c0a5af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	7b87c544078ac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 seconds ago        Running             busybox                   0                   b922299f9d89e
	20fba57c87c13       5185b96f0becf                                                                                         26 seconds ago       Running             coredns                   1                   781b48ae8dfaa
	6eb35688e880e       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              39 seconds ago       Running             kindnet-cni               0                   dd0a98d9fc833
	7b6c5ea21c78d       6e38f40d628db                                                                                         39 seconds ago       Running             storage-provisioner       0                   4f736381c5b2d
	e42bbd739d735       5185b96f0becf                                                                                         40 seconds ago       Exited              coredns                   0                   6a7397548127f
	14dbbf3c014be       46a6bb3c77ce0                                                                                         41 seconds ago       Running             kube-proxy                0                   3ef84eaf12535
	b6625d6f60721       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   0358ca89ade14
	21a2538a45b03       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   679fa69b2a76c
	66406a6af762d       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   63f9f3e248e49
	7bfce1d4138f9       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   53b2492db94e2
	
	* 
	* ==> coredns [20fba57c87c1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35048 - 6704 "HINFO IN 4938656220510300332.7239367872624460590. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007526579s
	[INFO] 10.244.0.3:43092 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198848s
	[INFO] 10.244.0.3:42969 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.011827633s
	[INFO] 10.244.0.3:37111 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010840995s
	[INFO] 10.244.0.3:49101 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.008721735s
	[INFO] 10.244.0.3:34707 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139228s
	[INFO] 10.244.0.3:51138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006605112s
	[INFO] 10.244.0.3:33076 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160291s
	[INFO] 10.244.0.3:53026 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174372s
	[INFO] 10.244.0.3:55111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008365317s
	[INFO] 10.244.0.3:35024 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101642s
	[INFO] 10.244.0.3:39296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125824s
	[INFO] 10.244.0.3:48645 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110462s
	[INFO] 10.244.0.3:57841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130253s
	[INFO] 10.244.0.3:34265 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094239s
	[INFO] 10.244.0.3:40254 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010202s
	[INFO] 10.244.0.3:56855 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101307s
	[INFO] 10.244.0.3:52954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158307s
	[INFO] 10.244.0.3:48106 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00013201s
	[INFO] 10.244.0.3:45916 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114819s
	[INFO] 10.244.0.3:46712 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107164s
	
	* 
	* ==> coredns [e42bbd739d73] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 334083178742072081.1122851254239722435. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 334083178742072081.1122851254239722435. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-461512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-461512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=multinode-461512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T00_56_51_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 00:56:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-461512
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 00:57:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 00:57:21 +0000   Fri, 24 Feb 2023 00:56:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-461512
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                9ba42aef-3183-4f44-952b-05c49f22ad59
	  Boot ID:                    fd195a10-b2a0-490a-9b98-4841e110d2e2
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-tj597                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-787d4945fb-r6m7z                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     43s
	  kube-system                 etcd-multinode-461512                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         55s
	  kube-system                 kindnet-5p4bl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-apiserver-multinode-461512             250m (3%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-multinode-461512    200m (2%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-dvmbp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-multinode-461512             100m (1%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  64s (x4 over 65s)  kubelet          Node multinode-461512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x4 over 65s)  kubelet          Node multinode-461512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x4 over 65s)  kubelet          Node multinode-461512 status is now: NodeHasSufficientPID
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s                kubelet          Node multinode-461512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s                kubelet          Node multinode-461512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s                kubelet          Node multinode-461512 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             55s                kubelet          Node multinode-461512 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                55s                kubelet          Node multinode-461512 status is now: NodeReady
	  Normal  RegisteredNode           44s                node-controller  Node multinode-461512 event: Registered Node multinode-461512 in Controller
	
	
	Name:               multinode-461512-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-461512-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 00:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-461512-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 00:57:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 00:57:34 +0000   Fri, 24 Feb 2023 00:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-461512-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                8d7d4998-5372-4a35-b974-d4f494ff6737
	  Boot ID:                    fd195a10-b2a0-490a-9b98-4841e110d2e2
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-5jg4x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-6xvgj               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-proxy-phwrs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node multinode-461512-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node multinode-461512-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node multinode-461512-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12s                kubelet          Node multinode-461512-m02 status is now: NodeReady
	  Normal  RegisteredNode           9s                 node-controller  Node multinode-461512-m02 event: Registered Node multinode-461512-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.008747] FS-Cache: O-key=[8] '86a00f0200000000'
	[  +0.006294] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=00000000b5c2b24e
	[  +0.007360] FS-Cache: N-key=[8] '86a00f0200000000'
	[  +3.026314] FS-Cache: Duplicate cookie detected
	[  +0.004684] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006737] FS-Cache: O-cookie d=00000000a2895d09{9p.inode} n=000000001405c4ca
	[  +0.007347] FS-Cache: O-key=[8] '85a00f0200000000'
	[  +0.004931] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006672] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=00000000dd6525a0
	[  +0.008733] FS-Cache: N-key=[8] '85a00f0200000000'
	[  +0.476759] FS-Cache: Duplicate cookie detected
	[  +0.004695] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000a2895d09{9p.inode} n=0000000071597208
	[  +0.007366] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.004941] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006605] FS-Cache: N-cookie d=00000000a2895d09{9p.inode} n=0000000018960a10
	[  +0.007376] FS-Cache: N-key=[8] '8da00f0200000000'
	[  +7.278389] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 93 1b 91 86 10 08 06
	[Feb24 00:49] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb24 00:52] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 66 2b c2 36 52 08 06
	[Feb24 00:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 5f 99 5d 1e ae 08 06
	
	* 
	* ==> etcd [21a2538a45b0] <==
	* {"level":"info","ts":"2023-02-24T00:56:45.651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-24T00:56:45.652Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T00:56:45.653Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-461512 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.279Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T00:56:46.280Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T00:56:46.281Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T00:56:46.281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:57:46 up 40 min,  0 users,  load average: 2.36, 1.91, 1.32
	Linux multinode-461512 5.15.0-1029-gcp #36~20.04.1-Ubuntu SMP Tue Jan 24 16:54:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [6eb35688e880] <==
	* I0224 00:57:06.949029       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 00:57:06.949064       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 00:57:06.949165       1 main.go:116] setting mtu 1500 for CNI 
	I0224 00:57:06.949176       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 00:57:06.949196       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 00:57:07.251040       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:07.251067       1 main.go:227] handling current node
	I0224 00:57:17.361449       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:17.361484       1 main.go:227] handling current node
	I0224 00:57:27.373478       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:27.373509       1 main.go:227] handling current node
	I0224 00:57:37.385885       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 00:57:37.385915       1 main.go:227] handling current node
	I0224 00:57:37.385927       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 00:57:37.385935       1 main.go:250] Node multinode-461512-m02 has CIDR [10.244.1.0/24] 
	I0224 00:57:37.386142       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [7bfce1d4138f] <==
	* I0224 00:56:48.023240       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 00:56:48.023264       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 00:56:48.023287       1 cache.go:39] Caches are synced for autoregister controller
	I0224 00:56:48.023294       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 00:56:48.023295       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 00:56:48.023329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 00:56:48.023466       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 00:56:48.025934       1 controller.go:615] quota admission added evaluator for: namespaces
	I0224 00:56:48.098212       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 00:56:48.717677       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 00:56:48.927009       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 00:56:48.930593       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 00:56:48.930610       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 00:56:49.319477       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 00:56:49.348154       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 00:56:49.463666       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 00:56:49.470272       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 00:56:49.471100       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 00:56:49.474535       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 00:56:49.962979       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 00:56:50.828467       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 00:56:50.837589       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 00:56:50.845708       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 00:57:03.268863       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 00:57:03.618091       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b6625d6f6072] <==
	* I0224 00:57:02.895423       1 shared_informer.go:280] Caches are synced for service account
	I0224 00:57:02.923859       1 shared_informer.go:280] Caches are synced for namespace
	I0224 00:57:02.969000       1 shared_informer.go:280] Caches are synced for disruption
	I0224 00:57:02.977292       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 00:57:03.030504       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 00:57:03.272456       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0224 00:57:03.343751       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 00:57:03.415059       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 00:57:03.415080       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0224 00:57:03.627402       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dvmbp"
	I0224 00:57:03.627438       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5p4bl"
	I0224 00:57:03.821160       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-9ws7r"
	I0224 00:57:03.827143       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-r6m7z"
	I0224 00:57:04.053158       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 00:57:04.058770       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-9ws7r"
	W0224 00:57:34.146710       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-461512-m02" does not exist
	I0224 00:57:34.153002       1 range_allocator.go:372] Set node multinode-461512-m02 PodCIDR to [10.244.1.0/24]
	I0224 00:57:34.156431       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-phwrs"
	I0224 00:57:34.158467       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6xvgj"
	W0224 00:57:34.861795       1 topologycache.go:232] Can't get CPU or zone information for multinode-461512-m02 node
	W0224 00:57:37.820186       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-461512-m02. Assuming now as a timestamp.
	I0224 00:57:37.820309       1 event.go:294] "Event occurred" object="multinode-461512-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-461512-m02 event: Registered Node multinode-461512-m02 in Controller"
	I0224 00:57:38.577084       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 00:57:38.584932       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-5jg4x"
	I0224 00:57:38.590210       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tj597"
	
	* 
	* ==> kube-proxy [14dbbf3c014b] <==
	* I0224 00:57:04.575725       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 00:57:04.575800       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 00:57:04.575825       1 server_others.go:535] "Using iptables proxy"
	I0224 00:57:04.671989       1 server_others.go:176] "Using iptables Proxier"
	I0224 00:57:04.672043       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 00:57:04.672059       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 00:57:04.672084       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 00:57:04.672110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 00:57:04.672443       1 server.go:655] "Version info" version="v1.26.1"
	I0224 00:57:04.672456       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 00:57:04.673273       1 config.go:444] "Starting node config controller"
	I0224 00:57:04.673283       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 00:57:04.673608       1 config.go:317] "Starting service config controller"
	I0224 00:57:04.673614       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 00:57:04.673635       1 config.go:226] "Starting endpoint slice config controller"
	I0224 00:57:04.673639       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 00:57:04.773689       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 00:57:04.773730       1 shared_informer.go:280] Caches are synced for node config
	I0224 00:57:04.773692       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [66406a6af762] <==
	* W0224 00:56:47.972975       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:47.972992       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 00:56:47.972998       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:47.973003       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:47.972988       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:47.973010       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 00:56:47.973019       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:47.973021       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:48.928005       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 00:56:48.928032       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0224 00:56:48.987666       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:48.987702       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.070184       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:49.070214       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.082957       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 00:56:49.082998       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0224 00:56:49.102941       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 00:56:49.102971       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0224 00:56:49.148793       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 00:56:49.148818       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 00:56:49.183926       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 00:56:49.183955       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 00:56:49.256012       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 00:56:49.256038       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0224 00:56:52.069555       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 00:56:34 UTC, end at Fri 2023-02-24 00:57:46 UTC. --
	Feb 24 00:57:06 multinode-461512 kubelet[2303]: I0224 00:57:06.879762    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-9ws7r" podStartSLOduration=3.879719521 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:06.879187419 +0000 UTC m=+16.070938576" watchObservedRunningTime="2023-02-24 00:57:06.879719521 +0000 UTC m=+16.071470667"
	Feb 24 00:57:07 multinode-461512 kubelet[2303]: I0224 00:57:07.240889    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r6m7z" podStartSLOduration=4.24084762 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:07.240515608 +0000 UTC m=+16.432266753" watchObservedRunningTime="2023-02-24 00:57:07.24084762 +0000 UTC m=+16.432598763"
	Feb 24 00:57:07 multinode-461512 kubelet[2303]: I0224 00:57:07.639420    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.6393830339999997 pod.CreationTimestamp="2023-02-24 00:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:07.639016106 +0000 UTC m=+16.830767454" watchObservedRunningTime="2023-02-24 00:57:07.639383034 +0000 UTC m=+16.831134180"
	Feb 24 00:57:08 multinode-461512 kubelet[2303]: I0224 00:57:08.041748    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5p4bl" podStartSLOduration=-9.223372031813065e+09 pod.CreationTimestamp="2023-02-24 00:57:03 +0000 UTC" firstStartedPulling="2023-02-24 00:57:04.477212916 +0000 UTC m=+13.668964045" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:08.041458959 +0000 UTC m=+17.233210103" watchObservedRunningTime="2023-02-24 00:57:08.041710347 +0000 UTC m=+17.233461491"
	Feb 24 00:57:11 multinode-461512 kubelet[2303]: I0224 00:57:11.649444    2303 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 00:57:11 multinode-461512 kubelet[2303]: I0224 00:57:11.650239    2303 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 00:57:18 multinode-461512 kubelet[2303]: I0224 00:57:18.997269    2303 scope.go:115] "RemoveContainer" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.010931    2303 scope.go:115] "RemoveContainer" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: E0224 00:57:19.011611    2303 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9" containerID="18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.011659    2303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9} err="failed to get container status \"18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9\": rpc error: code = Unknown desc = Error: No such container: 18e147ce1fbb3c2c6bf3083407574cb78eed36a4afc983075f5d47ca5995d8e9"
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.073868    2303 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume\") pod \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\" (UID: \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\") "
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.073918    2303 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x48v5\" (UniqueName: \"kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5\") pod \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\" (UID: \"4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d\") "
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: W0224 00:57:19.074158    2303 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.074360    2303 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" (UID: "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.075664    2303 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5" (OuterVolumeSpecName: "kube-api-access-x48v5") pod "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" (UID: "4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d"). InnerVolumeSpecName "kube-api-access-x48v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.175090    2303 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-x48v5\" (UniqueName: \"kubernetes.io/projected/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-kube-api-access-x48v5\") on node \"multinode-461512\" DevicePath \"\""
	Feb 24 00:57:19 multinode-461512 kubelet[2303]: I0224 00:57:19.175121    2303 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d-config-volume\") on node \"multinode-461512\" DevicePath \"\""
	Feb 24 00:57:20 multinode-461512 kubelet[2303]: I0224 00:57:20.014110    2303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7397548127f04391e2ea61c9147e8b7c0c83c2e388ce94493c4924a2c0a5af"
	Feb 24 00:57:20 multinode-461512 kubelet[2303]: I0224 00:57:20.983453    2303 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d path="/var/lib/kubelet/pods/4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d/volumes"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.594449    2303 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: E0224 00:57:38.594529    2303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" containerName="coredns"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.594566    2303 memory_manager.go:346] "RemoveStaleState removing state" podUID="4ebb0558-8bef-4c1e-b4f0-cf8ed8532f2d" containerName="coredns"
	Feb 24 00:57:38 multinode-461512 kubelet[2303]: I0224 00:57:38.785698    2303 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfh7n\" (UniqueName: \"kubernetes.io/projected/0da6e203-810a-4320-8612-085e93ef297c-kube-api-access-wfh7n\") pod \"busybox-6b86dd6d48-tj597\" (UID: \"0da6e203-810a-4320-8612-085e93ef297c\") " pod="default/busybox-6b86dd6d48-tj597"
	Feb 24 00:57:39 multinode-461512 kubelet[2303]: I0224 00:57:39.131438    2303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b922299f9d89e2207ff7631d4da8d198dc1d0e29717d959b857522e26d7636ce"
	Feb 24 00:57:40 multinode-461512 kubelet[2303]: I0224 00:57:40.153699    2303 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-tj597" podStartSLOduration=-9.22337203470111e+09 pod.CreationTimestamp="2023-02-24 00:57:38 +0000 UTC" firstStartedPulling="2023-02-24 00:57:39.150002642 +0000 UTC m=+48.341753799" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 00:57:40.153274036 +0000 UTC m=+49.345025183" watchObservedRunningTime="2023-02-24 00:57:40.153665196 +0000 UTC m=+49.345416340"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-461512 -n multinode-461512
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-461512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.23s)

Test pass (287/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.4
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.1/json-events 7.14
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.6
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
18 TestDownloadOnlyKic 1.59
19 TestBinaryMirror 1.12
20 TestOffline 64
22 TestAddons/Setup 101.11
24 TestAddons/parallel/Registry 15.03
25 TestAddons/parallel/Ingress 20.77
26 TestAddons/parallel/MetricsServer 5.74
27 TestAddons/parallel/HelmTiller 14.91
29 TestAddons/parallel/CSI 63.11
30 TestAddons/parallel/Headlamp 10.21
31 TestAddons/parallel/CloudSpanner 5.43
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 11.18
36 TestCertOptions 31.7
37 TestCertExpiration 245.62
38 TestDockerFlags 36.57
39 TestForceSystemdFlag 35.66
40 TestForceSystemdEnv 33.44
41 TestKVMDriverInstallOrUpdate 5.66
45 TestErrorSpam/setup 27.39
46 TestErrorSpam/start 1.09
47 TestErrorSpam/status 1.45
48 TestErrorSpam/pause 1.61
49 TestErrorSpam/unpause 1.6
50 TestErrorSpam/stop 1.72
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 44.69
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 69.67
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
62 TestFunctional/serial/CacheCmd/cache/add_local 0.87
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
67 TestFunctional/serial/CacheCmd/cache/delete 0.09
68 TestFunctional/serial/MinikubeKubectlCmd 0.1
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
70 TestFunctional/serial/ExtraConfig 41.78
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 1.09
73 TestFunctional/serial/LogsFileCmd 1.12
75 TestFunctional/parallel/ConfigCmd 0.37
76 TestFunctional/parallel/DashboardCmd 8.6
77 TestFunctional/parallel/DryRun 0.66
78 TestFunctional/parallel/InternationalLanguage 0.32
79 TestFunctional/parallel/StatusCmd 1.68
83 TestFunctional/parallel/ServiceCmdConnect 8.04
84 TestFunctional/parallel/AddonsCmd 0.18
85 TestFunctional/parallel/PersistentVolumeClaim 30.51
87 TestFunctional/parallel/SSHCmd 1.13
88 TestFunctional/parallel/CpCmd 2.35
89 TestFunctional/parallel/MySQL 19.79
90 TestFunctional/parallel/FileSync 0.69
91 TestFunctional/parallel/CertSync 3.44
95 TestFunctional/parallel/NodeLabels 0.06
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
99 TestFunctional/parallel/License 0.17
100 TestFunctional/parallel/DockerEnv/bash 2.15
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.31
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.23
108 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.81
109 TestFunctional/parallel/Version/short 0.07
110 TestFunctional/parallel/Version/components 0.83
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.38
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.38
115 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
116 TestFunctional/parallel/ImageCommands/Setup 1.03
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.14
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.82
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.64
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.6
127 TestFunctional/parallel/ProfileCmd/profile_list 0.59
128 TestFunctional/parallel/MountCmd/any-port 9.04
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.78
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.21
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.6
134 TestFunctional/parallel/MountCmd/specific-port 2.89
135 TestFunctional/delete_addon-resizer_images 0.16
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 0.95
142 TestImageBuild/serial/BuildWithBuildArg 1.02
143 TestImageBuild/serial/BuildWithDockerIgnore 0.45
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.37
147 TestIngressAddonLegacy/StartLegacyK8sCluster 97.43
149 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.16
150 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.43
151 TestIngressAddonLegacy/serial/ValidateIngressAddons 37.16
154 TestJSONOutput/start/Command 44.52
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.65
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.6
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 5.97
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.43
179 TestKicCustomNetwork/create_custom_network 27.93
180 TestKicCustomNetwork/use_default_bridge_network 27.44
181 TestKicExistingNetwork 28.74
182 TestKicCustomSubnet 29.82
183 TestKicStaticIP 29.18
184 TestMainNoArgs 0.05
185 TestMinikubeProfile 59.45
188 TestMountStart/serial/StartWithMountFirst 7.39
189 TestMountStart/serial/VerifyMountFirst 0.45
190 TestMountStart/serial/StartWithMountSecond 7.31
191 TestMountStart/serial/VerifyMountSecond 0.44
192 TestMountStart/serial/DeleteFirst 2.08
193 TestMountStart/serial/VerifyMountPostDelete 0.45
194 TestMountStart/serial/Stop 1.39
195 TestMountStart/serial/RestartStopped 7.97
196 TestMountStart/serial/VerifyMountPostStop 0.44
199 TestMultiNode/serial/FreshStart2Nodes 71.53
202 TestMultiNode/serial/AddNode 17.28
203 TestMultiNode/serial/ProfileList 0.46
204 TestMultiNode/serial/CopyFile 16.05
205 TestMultiNode/serial/StopNode 3.04
206 TestMultiNode/serial/StartAfterStop 12.57
207 TestMultiNode/serial/RestartKeepsNodes 96.88
208 TestMultiNode/serial/DeleteNode 6.11
209 TestMultiNode/serial/StopMultiNode 22.05
210 TestMultiNode/serial/RestartMultiNode 54.12
211 TestMultiNode/serial/ValidateNameConflict 30.02
216 TestPreload 121.81
218 TestScheduledStopUnix 102.57
219 TestSkaffold 59.86
221 TestInsufficientStorage 12.92
222 TestRunningBinaryUpgrade 75.61
224 TestKubernetesUpgrade 368.06
225 TestMissingContainerUpgrade 139.33
226 TestStoppedBinaryUpgrade/Setup 0.59
227 TestStoppedBinaryUpgrade/Upgrade 99.85
228 TestStoppedBinaryUpgrade/MinikubeLogs 1.52
230 TestPause/serial/Start 58.1
238 TestPause/serial/SecondStartNoReconfiguration 39.07
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
241 TestNoKubernetes/serial/StartWithK8s 25.97
242 TestPause/serial/Pause 0.69
243 TestPause/serial/VerifyStatus 0.58
244 TestPause/serial/Unpause 0.67
245 TestPause/serial/PauseAgain 0.84
246 TestPause/serial/DeletePaused 2.87
247 TestPause/serial/VerifyDeletedResources 1.17
259 TestNoKubernetes/serial/StartWithStopK8s 16.91
260 TestNoKubernetes/serial/Start 6.82
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.51
262 TestNoKubernetes/serial/ProfileList 19.51
264 TestStartStop/group/old-k8s-version/serial/FirstStart 123.72
265 TestNoKubernetes/serial/Stop 1.43
266 TestNoKubernetes/serial/StartNoArgs 7.54
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.52
269 TestStartStop/group/embed-certs/serial/FirstStart 51.17
270 TestStartStop/group/embed-certs/serial/DeployApp 7.35
271 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
272 TestStartStop/group/embed-certs/serial/Stop 10.94
273 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
274 TestStartStop/group/embed-certs/serial/SecondStart 563.84
276 TestStartStop/group/no-preload/serial/FirstStart 63.36
278 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47.62
279 TestStartStop/group/old-k8s-version/serial/DeployApp 9.35
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.68
281 TestStartStop/group/old-k8s-version/serial/Stop 11.09
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
283 TestStartStop/group/old-k8s-version/serial/SecondStart 337.42
284 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
285 TestStartStop/group/no-preload/serial/DeployApp 9.32
286 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.76
287 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.06
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.69
289 TestStartStop/group/no-preload/serial/Stop 10.82
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 557.54
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
293 TestStartStop/group/no-preload/serial/SecondStart 561.72
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.55
297 TestStartStop/group/old-k8s-version/serial/Pause 3.97
299 TestStartStop/group/newest-cni/serial/FirstStart 42.3
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
302 TestStartStop/group/newest-cni/serial/Stop 5.83
303 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
304 TestStartStop/group/newest-cni/serial/SecondStart 27.94
305 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
308 TestStartStop/group/newest-cni/serial/Pause 3.62
309 TestNetworkPlugins/group/auto/Start 49.42
310 TestNetworkPlugins/group/auto/KubeletFlags 0.48
311 TestNetworkPlugins/group/auto/NetCatPod 10.24
312 TestNetworkPlugins/group/auto/DNS 0.15
313 TestNetworkPlugins/group/auto/Localhost 0.13
314 TestNetworkPlugins/group/auto/HairPin 0.13
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.54
318 TestStartStop/group/embed-certs/serial/Pause 3.82
319 TestNetworkPlugins/group/kindnet/Start 57.37
320 TestNetworkPlugins/group/calico/Start 76.11
321 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
323 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
326 TestNetworkPlugins/group/kindnet/DNS 0.16
327 TestNetworkPlugins/group/kindnet/Localhost 0.13
328 TestNetworkPlugins/group/kindnet/HairPin 0.12
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.55
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.79
332 TestNetworkPlugins/group/calico/ControllerPod 5.02
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
334 TestNetworkPlugins/group/custom-flannel/Start 64.25
335 TestNetworkPlugins/group/calico/KubeletFlags 0.58
336 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.75
337 TestNetworkPlugins/group/calico/NetCatPod 13.34
338 TestStartStop/group/no-preload/serial/Pause 4.69
339 TestNetworkPlugins/group/false/Start 46.97
340 TestNetworkPlugins/group/calico/DNS 0.17
341 TestNetworkPlugins/group/calico/Localhost 0.17
342 TestNetworkPlugins/group/calico/HairPin 0.19
343 TestNetworkPlugins/group/enable-default-cni/Start 85.85
344 TestNetworkPlugins/group/flannel/Start 60.81
345 TestNetworkPlugins/group/false/KubeletFlags 0.68
346 TestNetworkPlugins/group/false/NetCatPod 9.26
347 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.64
348 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
349 TestNetworkPlugins/group/false/DNS 0.18
350 TestNetworkPlugins/group/false/Localhost 0.15
351 TestNetworkPlugins/group/false/HairPin 0.15
352 TestNetworkPlugins/group/custom-flannel/DNS 0.18
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
355 TestNetworkPlugins/group/bridge/Start 59.93
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.54
357 TestNetworkPlugins/group/kubenet/Start 82.18
358 TestNetworkPlugins/group/flannel/ControllerPod 5.02
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
360 TestNetworkPlugins/group/flannel/KubeletFlags 0.58
361 TestNetworkPlugins/group/flannel/NetCatPod 10.32
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
365 TestNetworkPlugins/group/flannel/DNS 0.17
366 TestNetworkPlugins/group/flannel/Localhost 0.14
367 TestNetworkPlugins/group/flannel/HairPin 0.15
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
369 TestNetworkPlugins/group/bridge/NetCatPod 9.19
370 TestNetworkPlugins/group/bridge/DNS 0.15
371 TestNetworkPlugins/group/bridge/Localhost 0.14
372 TestNetworkPlugins/group/bridge/HairPin 0.13
373 TestNetworkPlugins/group/kubenet/KubeletFlags 0.51
374 TestNetworkPlugins/group/kubenet/NetCatPod 9.21
375 TestNetworkPlugins/group/kubenet/DNS 0.14
376 TestNetworkPlugins/group/kubenet/Localhost 0.14
377 TestNetworkPlugins/group/kubenet/HairPin 0.12
TestDownloadOnly/v1.16.0/json-events (8.4s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-534183 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-534183 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.402822474s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.40s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-534183
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-534183: exit status 85 (60.253818ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-534183 | jenkins | v1.29.0 | 24 Feb 23 00:40 UTC |          |
	|         | -p download-only-534183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:40:53
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:40:53.212206   10482 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:40:53.212360   10482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:40:53.212375   10482 out.go:309] Setting ErrFile to fd 2...
	I0224 00:40:53.212388   10482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:40:53.212738   10482 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	W0224 00:40:53.212854   10482 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3785/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3785/.minikube/config/config.json: no such file or directory
	I0224 00:40:53.213417   10482 out.go:303] Setting JSON to true
	I0224 00:40:53.214238   10482 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1402,"bootTime":1677197851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:40:53.214290   10482 start.go:135] virtualization: kvm guest
	I0224 00:40:53.216693   10482 out.go:97] [download-only-534183] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:40:53.218199   10482 out.go:169] MINIKUBE_LOCATION=15909
	W0224 00:40:53.216783   10482 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 00:40:53.216805   10482 notify.go:220] Checking for updates...
	I0224 00:40:53.220853   10482 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:40:53.222253   10482 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:40:53.223499   10482 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:40:53.224947   10482 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 00:40:53.227467   10482 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 00:40:53.227594   10482 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:40:53.294771   10482 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:40:53.294858   10482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:40:53.410949   10482 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-02-24 00:40:53.402961927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:40:53.411027   10482 docker.go:294] overlay module found
	I0224 00:40:53.412841   10482 out.go:97] Using the docker driver based on user configuration
	I0224 00:40:53.412866   10482 start.go:296] selected driver: docker
	I0224 00:40:53.412871   10482 start.go:857] validating driver "docker" against <nil>
	I0224 00:40:53.412942   10482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:40:53.524522   10482 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-02-24 00:40:53.516924654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:40:53.524632   10482 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 00:40:53.525086   10482 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0224 00:40:53.525227   10482 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 00:40:53.527319   10482 out.go:169] Using Docker driver with root privileges
	I0224 00:40:53.528855   10482 cni.go:84] Creating CNI manager for ""
	I0224 00:40:53.528875   10482 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 00:40:53.528882   10482 start_flags.go:319] config:
	{Name:download-only-534183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-534183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:40:53.530363   10482 out.go:97] Starting control plane node download-only-534183 in cluster download-only-534183
	I0224 00:40:53.530379   10482 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:40:53.531750   10482 out.go:97] Pulling base image ...
	I0224 00:40:53.531777   10482 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 00:40:53.531905   10482 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:40:53.575597   10482 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 00:40:53.575619   10482 cache.go:57] Caching tarball of preloaded images
	I0224 00:40:53.575734   10482 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 00:40:53.577646   10482 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0224 00:40:53.577665   10482 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 00:40:53.593092   10482 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 00:40:53.593198   10482 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0224 00:40:53.593287   10482 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 00:40:53.608157   10482 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 00:40:57.702411   10482 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 00:40:57.702484   10482 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-534183"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.26.1/json-events (7.14s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-534183 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-534183 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.135084533s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (7.14s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-534183
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-534183: exit status 85 (63.473368ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-534183 | jenkins | v1.29.0 | 24 Feb 23 00:40 UTC |          |
	|         | -p download-only-534183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-534183 | jenkins | v1.29.0 | 24 Feb 23 00:41 UTC |          |
	|         | -p download-only-534183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:41:01
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:41:01.677386   10721 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:41:01.677578   10721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:41:01.677587   10721 out.go:309] Setting ErrFile to fd 2...
	I0224 00:41:01.677591   10721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:41:01.677686   10721 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	W0224 00:41:01.677781   10721 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3785/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3785/.minikube/config/config.json: no such file or directory
	I0224 00:41:01.678169   10721 out.go:303] Setting JSON to true
	I0224 00:41:01.678867   10721 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1411,"bootTime":1677197851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:41:01.678937   10721 start.go:135] virtualization: kvm guest
	I0224 00:41:01.681396   10721 out.go:97] [download-only-534183] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:41:01.683098   10721 out.go:169] MINIKUBE_LOCATION=15909
	I0224 00:41:01.681520   10721 notify.go:220] Checking for updates...
	I0224 00:41:01.686152   10721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:41:01.687656   10721 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:41:01.689178   10721 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:41:01.690636   10721 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 00:41:01.693457   10721 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 00:41:01.693830   10721 config.go:182] Loaded profile config "download-only-534183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0224 00:41:01.693883   10721 start.go:765] api.Load failed for download-only-534183: filestore "download-only-534183": Docker machine "download-only-534183" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0224 00:41:01.693925   10721 driver.go:365] Setting default libvirt URI to qemu:///system
	W0224 00:41:01.693952   10721 start.go:765] api.Load failed for download-only-534183: filestore "download-only-534183": Docker machine "download-only-534183" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0224 00:41:01.761153   10721 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:41:01.761232   10721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:41:01.874525   10721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-24 00:41:01.866746707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:41:01.874614   10721 docker.go:294] overlay module found
	I0224 00:41:01.876516   10721 out.go:97] Using the docker driver based on existing profile
	I0224 00:41:01.876533   10721 start.go:296] selected driver: docker
	I0224 00:41:01.876539   10721 start.go:857] validating driver "docker" against &{Name:download-only-534183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-534183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:41:01.876647   10721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:41:01.984179   10721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-24 00:41:01.976965662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:41:01.984752   10721 cni.go:84] Creating CNI manager for ""
	I0224 00:41:01.984773   10721 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 00:41:01.984788   10721 start_flags.go:319] config:
	{Name:download-only-534183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-534183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:41:01.986715   10721 out.go:97] Starting control plane node download-only-534183 in cluster download-only-534183
	I0224 00:41:01.986741   10721 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 00:41:01.988193   10721 out.go:97] Pulling base image ...
	I0224 00:41:01.988214   10721 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:41:01.988297   10721 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 00:41:02.008936   10721 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 00:41:02.008952   10721 cache.go:57] Caching tarball of preloaded images
	I0224 00:41:02.009091   10721 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 00:41:02.011089   10721 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0224 00:41:02.011108   10721 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0224 00:41:02.041211   10721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /home/jenkins/minikube-integration/15909-3785/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 00:41:02.051079   10721 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 00:41:02.051184   10721 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0224 00:41:02.051203   10721 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0224 00:41:02.051209   10721 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0224 00:41:02.051219   10721 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-534183"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.6s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.60s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-534183
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestDownloadOnlyKic (1.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-848691 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-848691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-848691
--- PASS: TestDownloadOnlyKic (1.59s)

TestBinaryMirror (1.12s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-067163 --alsologtostderr --binary-mirror http://127.0.0.1:38921 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-067163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-067163
--- PASS: TestBinaryMirror (1.12s)

TestOffline (64s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-866888 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-866888 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m0.649722771s)
helpers_test.go:175: Cleaning up "offline-docker-866888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-866888
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-866888: (3.351047878s)
--- PASS: TestOffline (64.00s)

TestAddons/Setup (101.11s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-905638 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-905638 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m41.10635517s)
--- PASS: TestAddons/Setup (101.11s)

TestAddons/parallel/Registry (15.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.004022ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kxwjc" [242ae584-47e2-48f1-a413-9114e148e716] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007756094s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2f6tf" [088a96cf-62ca-452e-a52a-02c59db172a1] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007399164s
addons_test.go:305: (dbg) Run:  kubectl --context addons-905638 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-905638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-905638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.048681423s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 ip
2023/02/24 00:43:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.03s)

TestAddons/parallel/Ingress (20.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-905638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-905638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.239552842s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-905638 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Done: kubectl --context addons-905638 replace --force -f testdata/nginx-ingress-v1.yaml: (1.187844951s)
addons_test.go:210: (dbg) Run:  kubectl --context addons-905638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3313870c-848e-4d0f-9017-a35327fa74cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3313870c-848e-4d0f-9017-a35327fa74cc] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.017391057s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-905638 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-905638 addons disable ingress-dns --alsologtostderr -v=1: (1.549237095s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-905638 addons disable ingress --alsologtostderr -v=1: (7.508807024s)
--- PASS: TestAddons/parallel/Ingress (20.77s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 11.133972ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-ft6cr" [819da1e2-6f46-47cd-84de-f956152970da] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007334089s
addons_test.go:380: (dbg) Run:  kubectl --context addons-905638 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/HelmTiller (14.91s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.966127ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-shkxm" [a804b7fc-0913-47ed-bc67-4fc28097c530] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007220909s
addons_test.go:438: (dbg) Run:  kubectl --context addons-905638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-905638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.441902169s)
addons_test.go:443: kubectl --context addons-905638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:438: (dbg) Run:  kubectl --context addons-905638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-905638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.366296634s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.91s)

TestAddons/parallel/CSI (63.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.079425ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-905638 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-905638 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e3c4971b-49f2-449e-8cb8-13e7e729aa0a] Pending
helpers_test.go:344: "task-pv-pod" [e3c4971b-49f2-449e-8cb8-13e7e729aa0a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e3c4971b-49f2-449e-8cb8-13e7e729aa0a] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005783894s
addons_test.go:549: (dbg) Run:  kubectl --context addons-905638 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-905638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-905638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-905638 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-905638 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-905638 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-905638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-905638 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [320cc051-0e27-447f-8a5b-6ec13619b71a] Pending
helpers_test.go:344: "task-pv-pod-restore" [320cc051-0e27-447f-8a5b-6ec13619b71a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [320cc051-0e27-447f-8a5b-6ec13619b71a] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007633883s
addons_test.go:591: (dbg) Run:  kubectl --context addons-905638 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-905638 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-905638 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-905638 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.447960533s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-905638 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.11s)

TestAddons/parallel/Headlamp (10.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-905638 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-905638 --alsologtostderr -v=1: (1.156828451s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-ck9vl" [2d66a190-562b-43f6-aaf1-41ea912ad1f0] Pending
helpers_test.go:344: "headlamp-5759877c79-ck9vl" [2d66a190-562b-43f6-aaf1-41ea912ad1f0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-ck9vl" [2d66a190-562b-43f6-aaf1-41ea912ad1f0] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.053978828s
--- PASS: TestAddons/parallel/Headlamp (10.21s)

TestAddons/parallel/CloudSpanner (5.43s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-d6wjq" [e1b52cec-19cb-4fd1-9047-3f31279c3171] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006945086s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-905638
--- PASS: TestAddons/parallel/CloudSpanner (5.43s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-905638 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-905638 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (11.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-905638
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-905638: (10.949538538s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-905638
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-905638
--- PASS: TestAddons/StoppedEnableDisable (11.18s)

TestCertOptions (31.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-005321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-005321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.054858648s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-005321 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-005321 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-005321 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-005321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-005321
E0224 01:11:44.344308   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.349570   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.359872   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.380129   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-005321: (2.664249076s)
--- PASS: TestCertOptions (31.70s)
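The SAN check in TestCertOptions boils down to running `openssl x509 -text` against the apiserver cert. A self-contained sketch of the same verification, using a throwaway self-signed cert in place of /var/lib/minikube/certs/apiserver.crt (the /tmp paths and CN here are illustrative, not from this run):

```shell
# Throwaway stand-in for minikube's apiserver cert, carrying the extra
# names/IPs the test passes via --apiserver-names / --apiserver-ips.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -days 1 -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Same inspection the test performs inside the node over SSH:
openssl x509 -text -noout -in /tmp/apiserver-demo.crt | grep -A1 "Subject Alternative Name"
```

If the requested names and IPs were applied, they all appear on the line following the "Subject Alternative Name" header.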

TestCertExpiration (245.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-054120 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-054120 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (29.822380691s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-054120 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-054120 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (32.644877053s)
helpers_test.go:175: Cleaning up "cert-expiration-054120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-054120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-054120: (3.152039244s)
--- PASS: TestCertExpiration (245.62s)
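TestCertExpiration's `--cert-expiration` windows (3m, then 8760h) map to the validity period set when a cert is issued. A standalone sketch of inspecting such a window with plain openssl, on a throwaway cert rather than the files under /var/lib/minikube/certs (paths are illustrative):

```shell
# Issue a cert valid for one year, analogous to --cert-expiration=8760h.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/expiry-demo.key -out /tmp/expiry-demo.crt \
  -days 365 -subj "/CN=expiry-demo"

# Read back the expiry timestamp (the notAfter field).
openssl x509 -enddate -noout -in /tmp/expiry-demo.crt

# Exit 0 only if the cert is still valid one hour from now.
openssl x509 -checkend 3600 -noout -in /tmp/expiry-demo.crt
```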

TestDockerFlags (36.57s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-894118 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-894118 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.684109607s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-894118 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-894118 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-894118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-894118
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-894118: (2.917538238s)
--- PASS: TestDockerFlags (36.57s)
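TestDockerFlags asserts that each `--docker-env` pair shows up in the docker unit's Environment property. A minimal sketch of that substring check, fed from a hand-written sample of the `systemctl show` output rather than a live daemon (the sample string is an assumption about the property's shape, not captured from this run):

```shell
# Sample shape of `systemctl show docker --property=Environment` on a node
# started with --docker-env=FOO=BAR --docker-env=BAZ=BAT.
sample='Environment=FOO=BAR BAZ=BAT'

# The test's assertion reduces to one substring check per --docker-env pair.
for kv in FOO=BAR BAZ=BAT; do
  case "$sample" in
    *"$kv"*) echo "found $kv" ;;
    *)       echo "missing $kv" >&2; exit 1 ;;
  esac
done
# prints "found FOO=BAR" then "found BAZ=BAT"
```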

TestForceSystemdFlag (35.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-494950 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0224 01:09:17.166654   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-494950 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.766009385s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-494950 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-494950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-494950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-494950: (4.349159803s)
--- PASS: TestForceSystemdFlag (35.66s)

TestForceSystemdEnv (33.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-268935 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-268935 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (29.981845047s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-268935 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-268935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-268935
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-268935: (2.882587646s)
--- PASS: TestForceSystemdEnv (33.44s)

TestKVMDriverInstallOrUpdate (5.66s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.66s)

TestErrorSpam/setup (27.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-814937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-814937 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-814937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-814937 --driver=docker  --container-runtime=docker: (27.392242484s)
--- PASS: TestErrorSpam/setup (27.39s)

TestErrorSpam/start (1.09s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 start --dry-run
--- PASS: TestErrorSpam/start (1.09s)

TestErrorSpam/status (1.45s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 status
--- PASS: TestErrorSpam/status (1.45s)

TestErrorSpam/pause (1.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 pause
--- PASS: TestErrorSpam/pause (1.61s)

TestErrorSpam/unpause (1.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (1.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 stop: (1.363192232s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814937 --log_dir /tmp/nospam-814937 stop
--- PASS: TestErrorSpam/stop (1.72s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /home/jenkins/minikube-integration/15909-3785/.minikube/files/etc/test/nested/copy/10470/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 start -p functional-304785 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (44.691353033s)
--- PASS: TestFunctional/serial/StartWithProxy (44.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (69.67s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-linux-amd64 start -p functional-304785 --alsologtostderr -v=8: (1m9.673032786s)
functional_test.go:657: soft start took 1m9.67376598s for "functional-304785" cluster.
--- PASS: TestFunctional/serial/SoftStart (69.67s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-304785 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-304785 /tmp/TestFunctionalserialCacheCmdcacheadd_local1453536053/001
functional_test.go:1083: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache add minikube-local-cache-test:functional-304785
functional_test.go:1088: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache delete minikube-local-cache-test:functional-304785
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-304785
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (457.059959ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cache reload
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 kubectl -- --context functional-304785 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-304785 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:751: (dbg) Done: out/minikube-linux-amd64 start -p functional-304785 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.774407399s)
functional_test.go:755: restart took 41.774533227s for "functional-304785" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.78s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-304785 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.09s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 logs
functional_test.go:1230: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 logs: (1.090039585s)
--- PASS: TestFunctional/serial/LogsCmd (1.09s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 logs --file /tmp/TestFunctionalserialLogsFileCmd49947660/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 logs --file /tmp/TestFunctionalserialLogsFileCmd49947660/001/logs.txt: (1.115013744s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config get cpus
E0224 00:47:53.981829   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 config get cpus: exit status 14 (72.075801ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config set cpus 2
E0224 00:47:54.103138   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 config get cpus
E0224 00:47:54.263758   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 config get cpus: exit status 14 (53.525571ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (8.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304785 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-304785 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 78169: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.60s)

TestFunctional/parallel/DryRun (0.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:968: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-304785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (272.079645ms)

-- stdout --
	* [functional-304785] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0224 00:48:18.141545   73105 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:48:18.141642   73105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:48:18.141650   73105 out.go:309] Setting ErrFile to fd 2...
	I0224 00:48:18.141654   73105 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:48:18.141753   73105 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 00:48:18.142283   73105 out.go:303] Setting JSON to false
	I0224 00:48:18.143512   73105 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1847,"bootTime":1677197851,"procs":676,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:48:18.143566   73105 start.go:135] virtualization: kvm guest
	I0224 00:48:18.145930   73105 out.go:177] * [functional-304785] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:48:18.147641   73105 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:48:18.147622   73105 notify.go:220] Checking for updates...
	I0224 00:48:18.148933   73105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:48:18.150302   73105 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:48:18.151641   73105 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:48:18.152978   73105 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:48:18.154250   73105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:48:18.155789   73105 config.go:182] Loaded profile config "functional-304785": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:48:18.156170   73105 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:48:18.232004   73105 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:48:18.232100   73105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:48:18.353296   73105 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-02-24 00:48:18.344857535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:48:18.353386   73105 docker.go:294] overlay module found
	I0224 00:48:18.355230   73105 out.go:177] * Using the docker driver based on existing profile
	I0224 00:48:18.356441   73105 start.go:296] selected driver: docker
	I0224 00:48:18.356453   73105 start.go:857] validating driver "docker" against &{Name:functional-304785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-304785 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:48:18.356549   73105 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:48:18.358499   73105 out.go:177] 
	W0224 00:48:18.359741   73105 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 00:48:18.361010   73105 out.go:177] 
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.66s)
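Editor's note: the non-zero exit above is the expected outcome. Even with --dry-run, minikube validates the requested memory against its usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal shell sketch of that check, assuming the fixed 1800MB floor and exit status shown in this log (minikube implements this in Go, and the real minimum can vary by driver):

```shell
#!/bin/sh
# Illustrative approximation only; the 1800MB floor and exit status 23
# are taken from the log above, not from minikube's source.
MIN_MEMORY_MB=1800

check_memory() {
    requested_mb=$1
    if [ "$requested_mb" -lt "$MIN_MEMORY_MB" ]; then
        echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:" \
             "Requested memory allocation ${requested_mb}MiB is less than" \
             "the usable minimum of ${MIN_MEMORY_MB}MB" >&2
        return 23
    fi
}

check_memory 250 2>/dev/null || echo "exit=$?"   # the test's deliberate failure
check_memory 4000 && echo "ok"                   # a passing allocation
```

The test treats exit status 23 plus the RSRC_INSUFFICIENT_REQ_MEMORY message as success, which is why the run is marked PASS despite the "Non-zero exit" line.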
TestFunctional/parallel/InternationalLanguage (0.32s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 start -p functional-304785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-304785 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (318.911466ms)
-- stdout --
	* [functional-304785] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0224 00:48:18.797960   73480 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:48:18.798156   73480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:48:18.798165   73480 out.go:309] Setting ErrFile to fd 2...
	I0224 00:48:18.798170   73480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:48:18.798320   73480 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 00:48:18.798915   73480 out.go:303] Setting JSON to false
	I0224 00:48:18.800447   73480 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1848,"bootTime":1677197851,"procs":679,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:48:18.800501   73480 start.go:135] virtualization: kvm guest
	I0224 00:48:18.802673   73480 out.go:177] * [functional-304785] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0224 00:48:18.804671   73480 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:48:18.804637   73480 notify.go:220] Checking for updates...
	I0224 00:48:18.806244   73480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:48:18.807730   73480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	I0224 00:48:18.809391   73480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	I0224 00:48:18.810834   73480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:48:18.812293   73480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:48:18.813869   73480 config.go:182] Loaded profile config "functional-304785": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:48:18.814290   73480 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:48:18.905375   73480 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0224 00:48:18.905635   73480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:48:19.055087   73480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2023-02-24 00:48:19.043923512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:48:19.055217   73480 docker.go:294] overlay module found
	I0224 00:48:19.058009   73480 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0224 00:48:19.059674   73480 start.go:296] selected driver: docker
	I0224 00:48:19.059689   73480 start.go:857] validating driver "docker" against &{Name:functional-304785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-304785 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:48:19.059785   73480 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:48:19.062420   73480 out.go:177] 
	W0224 00:48:19.064126   73480 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0224 00:48:19.065733   73480 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
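Editor's note: this test re-runs the same insufficient-memory command under a French locale and passes because the error is localized ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY", i.e. "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY"). A sketch of locale-keyed message selection, using a hypothetical two-entry table rather than minikube's real translation files:

```shell
#!/bin/sh
# Hypothetical message table; minikube ships real translations as data files
# and selects them from the locale environment (LC_ALL / LANG).
msg_exiting() {
    case "${LC_ALL:-${LANG:-en_US}}" in
        fr*) echo "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" ;;
        *)   echo "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" ;;
    esac
}

LC_ALL=fr_FR.UTF-8 msg_exiting   # French, as captured in this test's output
LC_ALL=en_US.UTF-8 msg_exiting   # English default, as in DryRun above
```

Note that only user-facing `out.go` lines are translated; the `I0224 ...` debug log lines remain English in both runs.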
TestFunctional/parallel/StatusCmd (1.68s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 status
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.68s)
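Editor's note: the second invocation above exercises `status -f` with a Go-template format string ("kublet" is spelled that way in the test's own format string, not a transcription error). On a healthy cluster the output is a single comma-separated `key:value` line; a sketch of parsing one field out of it, assuming representative values rather than this run's actual capture:

```shell
#!/bin/sh
# Representative output of:
#   minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
# The Running/Configured values are assumed; the test passed, so the
# components were healthy, but the raw line is not shown in the log.
status_line="host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"

# Extract one value from the comma-separated key:value list.
status_field() {
    printf '%s\n' "$1" | tr ',' '\n' | awk -F: -v k="$2" '$1 == k { print $2 }'
}

status_field "$status_line" apiserver    # -> Running
status_field "$status_line" kubeconfig   # -> Configured
```

The `-o json` form in the third invocation is the machine-readable alternative when more than a handful of fields are needed.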
TestFunctional/parallel/ServiceCmdConnect (8.04s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-304785 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-304785 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-srfwp" [f2fb9cc8-d3c4-4b2c-969f-e3fb2105dbd1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-srfwp" [f2fb9cc8-d3c4-4b2c-969f-e3fb2105dbd1] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006666789s
functional_test.go:1617: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 service hello-node-connect --url
functional_test.go:1623: found endpoint for hello-node-connect: http://192.168.49.2:30946
functional_test.go:1643: http://192.168.49.2:30946: success! body:
Hostname: hello-node-connect-5cf7cc858f-srfwp
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30946
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.04s)
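Editor's note: `minikube service ... --url` returned the NodePort endpoint `http://192.168.49.2:30946`, which the test then probed to get the echoserver body above. Splitting such an endpoint into host and port needs no external tools, only POSIX parameter expansion; the follow-up curl is hypothetical here since it needs the live cluster:

```shell
#!/bin/sh
# Split the NodePort endpoint URL the test discovered into host and port.
url="http://192.168.49.2:30946"

hostport=${url#http://}      # strip the scheme -> 192.168.49.2:30946
host=${hostport%:*}          # longest-prefix keep -> 192.168.49.2
port=${hostport##*:}         # text after the last colon -> 30946

echo "host=$host port=$port"
# Against the live cluster, `curl -fsS "http://$host:$port/"` would return
# the echoserver body recorded above (Hostname, Request Information, ...).
```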
TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)
TestFunctional/parallel/PersistentVolumeClaim (30.51s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3cebe815-a91b-4950-ac44-02dafe2e4494] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00807964s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-304785 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-304785 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-304785 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-304785 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [338e7f6c-d57f-4c6a-92cf-d0a48d799aed] Pending
helpers_test.go:344: "sp-pod" [338e7f6c-d57f-4c6a-92cf-d0a48d799aed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [338e7f6c-d57f-4c6a-92cf-d0a48d799aed] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.01139957s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-304785 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-304785 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-304785 delete -f testdata/storage-provisioner/pod.yaml: (1.424226254s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-304785 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd171043-be40-4ef3-ad99-a78e4080cc61] Pending
helpers_test.go:344: "sp-pod" [dd171043-be40-4ef3-ad99-a78e4080cc61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd171043-be40-4ef3-ad99-a78e4080cc61] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006861667s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-304785 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.51s)
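Editor's note: the property this test exercises is that data written to the PVC-backed mount survives deleting and recreating the consuming pod (touch `/tmp/mount/foo` in the first `sp-pod`, delete it, recreate it, then `ls /tmp/mount`). With no cluster available here, a local directory can stand in for the bound volume to sketch the same invariant:

```shell
#!/bin/sh
# Local stand-in for the test's flow: /tmp/mount inside sp-pod is backed by
# the PVC, so a file written before the pod is deleted must still be there
# after recreation. A scratch directory plays the role of the volume.
vol=$(mktemp -d)

# "pod 1": kubectl exec sp-pod -- touch /tmp/mount/foo
touch "$vol/foo"

# "delete and recreate the pod": the container filesystem goes away,
# the provisioned volume does not.
# (kubectl delete -f pod.yaml && kubectl apply -f pod.yaml)

# "pod 2": kubectl exec sp-pod -- ls /tmp/mount
ls "$vol" | grep -qx foo && echo "data survived"

rm -rf "$vol"
```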
TestFunctional/parallel/SSHCmd (1.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.13s)
TestFunctional/parallel/CpCmd (2.35s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh -n functional-304785 "sudo cat /home/docker/cp-test.txt"
E0224 00:47:54.585262   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 cp functional-304785:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd696698678/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh -n functional-304785 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)
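Editor's note: the cp test's invariant is a byte-identical round trip: copy a file into the node, copy it back out, and compare it to the original with `ssh -n ... sudo cat`. The same check sketched locally, with temp files standing in for the node path and the `/tmp/TestFunctionalparallelCpCmd.../cp-test.txt` destination:

```shell
#!/bin/sh
# Local stand-in for the round trip performed by `minikube cp` in the log.
src=$(mktemp) && printf 'cp-test contents\n' > "$src"
node=$(mktemp)   # plays /home/docker/cp-test.txt inside the node
back=$(mktemp)   # plays the file copied back to the host

cp "$src" "$node"    # minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$node" "$back"   # minikube cp <node>:/home/docker/cp-test.txt <local tmp>

cmp -s "$src" "$back" && echo "round trip ok"
rm -f "$src" "$node" "$back"
```

The interleaved `cert_rotation.go` E-lines are unrelated background noise from a deleted profile's client cert and do not affect this test.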
TestFunctional/parallel/MySQL (19.79s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-304785 replace --force -f testdata/mysql.yaml
E0224 00:47:56.506607   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-48m4r" [52bc3de0-0652-48b0-9bf5-ff8b767e5009] Pending
helpers_test.go:344: "mysql-888f84dd9-48m4r" [52bc3de0-0652-48b0-9bf5-ff8b767e5009] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-48m4r" [52bc3de0-0652-48b0-9bf5-ff8b767e5009] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.008254302s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-304785 exec mysql-888f84dd9-48m4r -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-304785 exec mysql-888f84dd9-48m4r -- mysql -ppassword -e "show databases;": exit status 1 (153.54024ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-304785 exec mysql-888f84dd9-48m4r -- mysql -ppassword -e "show databases;"
E0224 00:48:14.428566   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-304785 exec mysql-888f84dd9-48m4r -- mysql -ppassword -e "show databases;": exit status 1 (137.348124ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-304785 exec mysql-888f84dd9-48m4r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.79s)
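Editor's note: the two ERROR 2002 failures above are benign. The pod is Running, but mysqld has not yet opened its socket, so the test retries the query until it succeeds (three attempts in this run). The retry pattern, with a stub standing in for the real `kubectl exec ... mysql` call:

```shell
#!/bin/sh
# Stub: fails twice with the socket error (exit 1), then succeeds,
# mimicking the three attempts recorded in the log above.
attempts=0
query_mysql() {
    attempts=$((attempts + 1))
    if [ "$attempts" -lt 3 ]; then
        echo "ERROR 2002 (HY000): Can't connect to local MySQL server" >&2
        return 1
    fi
    echo "Database"   # header line of `show databases;` output
}

# Retry with a short pause until the query succeeds or we give up.
n=0
until query_mysql 2>/dev/null; do
    n=$((n + 1))
    [ "$n" -ge 10 ] && { echo "mysql never became ready" >&2; exit 1; }
    sleep 1
done
echo "succeeded after $attempts attempts"
```

Waiting only for the pod phase to be Running is not enough for a database; readiness has to be probed at the protocol level, which is exactly what the repeated `show databases;` does.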

                                                
                                    
TestFunctional/parallel/FileSync (0.69s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/10470/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /etc/test/nested/copy/10470/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.69s)

                                                
                                    
TestFunctional/parallel/CertSync (3.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/10470.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /etc/ssl/certs/10470.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/10470.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /usr/share/ca-certificates/10470.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/104702.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /etc/ssl/certs/104702.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/104702.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /usr/share/ca-certificates/104702.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.44s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-304785 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh "sudo systemctl is-active crio": exit status 1 (587.957819ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
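The "non-zero exit" above is expected: this test asserts that crio is *not* the active runtime. `systemctl is-active` exits 0 only when the unit is active (here it printed `inactive` and exited 3, which the ssh wrapper relayed). A sketch of how that result is interpreted (hypothetical helper name; exact non-zero codes vary by systemd version, so only the zero/non-zero split is relied on):

```python
def runtime_is_disabled(exit_status: int, stdout: str) -> bool:
    """Interpret `systemctl is-active <unit>` for the runtime check.

    systemd convention: exit 0 means the unit is active; any non-zero
    status (commonly 3 for inactive, as logged above) means it is not.
    stdout ("inactive", "active", "failed", ...) is kept for messages.
    """
    return exit_status != 0 and stdout.strip() != "active"

print(runtime_is_disabled(3, "inactive\n"))  # → True
print(runtime_is_disabled(0, "active\n"))    # → False
```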

                                                
                                    
TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.15s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-304785 docker-env) && out/minikube-linux-amd64 status -p functional-304785"
E0224 00:47:53.945499   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 00:47:53.951346   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 00:47:53.961581   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-304785 docker-env) && out/minikube-linux-amd64 status -p functional-304785": (1.282825614s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-304785 docker-env) && docker images"
E0224 00:47:55.226442   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-304785 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-304785 apply -f testdata/testsvc.yaml
E0224 00:47:59.067735   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d25b2e50-e019-48e8-a246-bba5145714f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d25b2e50-e019-48e8-a246-bba5145714f9] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.007325783s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.23s)
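The wait above polls the pods matching `run=nginx-svc` until they are healthy, observing the Pending → Running transition within the 4m budget. A minimal sketch of that polling loop (hypothetical names; `get_phases` stands in for listing pods by label selector and reading `.status.phase` — the real helper also checks the Ready condition):

```python
import time

def wait_for_pods(get_phases, timeout=240.0, interval=0.5,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll until every matching pod reports phase "Running".

    get_phases() returns the current list of pod phases; an empty list
    (no pods scheduled yet) does not count as success.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        phases = get_phases()
        if phases and all(p == "Running" for p in phases):
            return phases
        sleep(interval)
    raise TimeoutError("pods not healthy within %.0fs" % timeout)

# Simulated rollout: Pending on the first two polls, Running afterwards.
polls = iter([["Pending"], ["Pending"], ["Running"]])
print(wait_for_pods(lambda: next(polls), interval=0))  # → ['Running']
```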

                                                
                                    
TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.81s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 service list -o json
functional_test.go:1552: Took "810.093103ms" to run "out/minikube-linux-amd64 -p functional-304785 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.81s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304785 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-304785
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-304785
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304785 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-304785 | 1fd23728a933e | 30B    |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-304785 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304785 image ls --format json:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-304785"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"1fd23728a933e369c830d180586adfead2d6051d4d592510f5f0d7fd36dbb975","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-304785"],"size":"30"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)
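The `image ls --format json` output above is a flat array of objects with `id`, `repoDigests`, `repoTags`, and a `size` field that is a decimal string of bytes. A sketch of consuming it with the standard library (the sample below is trimmed from the run above):

```python
import json

# Trimmed sample of `minikube image ls --format json` output.
raw = '''[
  {"id": "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a",
   "repoDigests": [], "repoTags": ["registry.k8s.io/coredns/coredns:v1.9.3"],
   "size": "48800000"},
  {"id": "fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7",
   "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.6-0"],
   "size": "299000000"}
]'''

images = json.loads(raw)
# "size" is a string, so convert before summing or sorting by size.
total_mb = sum(int(img["size"]) for img in images) / 1_000_000
tags = [tag for img in images for tag in img["repoTags"]]
print(sorted(tags))
print(round(total_mb, 1))  # → 347.8
```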

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304785 image ls --format yaml:
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 1fd23728a933e369c830d180586adfead2d6051d4d592510f5f0d7fd36dbb975
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-304785
size: "30"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-304785
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh pgrep buildkitd: exit status 1 (565.340147ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image build -t localhost/my-image:functional-304785 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 image build -t localhost/my-image:functional-304785 testdata/build: (2.860809755s)
functional_test.go:317: (dbg) Stdout: out/minikube-linux-amd64 -p functional-304785 image build -t localhost/my-image:functional-304785 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b7401dc2797b
Removing intermediate container b7401dc2797b
---> 6fc3f667d6fc
Step 3/3 : ADD content.txt /
---> f330be9f32bd
Successfully built f330be9f32bd
Successfully tagged localhost/my-image:functional-304785
functional_test.go:320: (dbg) Stderr: out/minikube-linux-amd64 -p functional-304785 image build -t localhost/my-image:functional-304785 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
E0224 00:48:34.909332   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
2023/02/24 00:48:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-304785
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785: (3.823745348s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785: (2.501215047s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-304785 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.107.67.216 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-304785 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 image load --daemon gcr.io/google-containers/addon-resizer:functional-304785: (3.273351375s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1312: Took "520.09608ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1326: Took "74.574138ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304785 /tmp/TestFunctionalparallelMountCmdany-port1909214202/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677199697187636245" to /tmp/TestFunctionalparallelMountCmdany-port1909214202/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677199697187636245" to /tmp/TestFunctionalparallelMountCmdany-port1909214202/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677199697187636245" to /tmp/TestFunctionalparallelMountCmdany-port1909214202/001/test-1677199697187636245
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (700.548834ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 00:48 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 00:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 00:48 test-1677199697187636245
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh cat /mount-9p/test-1677199697187636245
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-304785 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [09e55f18-24ea-4e94-83d5-b2a4c9fcb51a] Pending
helpers_test.go:344: "busybox-mount" [09e55f18-24ea-4e94-83d5-b2a4c9fcb51a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [09e55f18-24ea-4e94-83d5-b2a4c9fcb51a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [09e55f18-24ea-4e94-83d5-b2a4c9fcb51a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00568772s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-304785 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304785 /tmp/TestFunctionalparallelMountCmdany-port1909214202/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.04s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.78s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1363: Took "727.898807ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1376: Took "54.858676ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image save gcr.io/google-containers/addon-resizer:functional-304785 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image rm gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 image save --daemon gcr.io/google-containers/addon-resizer:functional-304785
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-304785 image save --daemon gcr.io/google-containers/addon-resizer:functional-304785: (2.448843975s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-304785
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-304785 /tmp/TestFunctionalparallelMountCmdspecific-port1457861890/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (568.393951ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304785 /tmp/TestFunctionalparallelMountCmdspecific-port1457861890/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-304785 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-304785 ssh "sudo umount -f /mount-9p": exit status 1 (577.416448ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-304785 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-304785 /tmp/TestFunctionalparallelMountCmdspecific-port1457861890/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.89s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-304785
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-304785
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-304785
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.95s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-763037
--- PASS: TestImageBuild/serial/NormalBuild (0.95s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-763037
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-763037: (1.02429022s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.45s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-763037
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.45s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-763037
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-823180 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0224 00:49:15.870372   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 00:50:37.791265   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-823180 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m37.425849516s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.16s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons enable ingress --alsologtostderr -v=5: (10.164585511s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.16s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (37.16s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-823180 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-823180 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.053974096s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-823180 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-823180 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [90f999f0-d2a2-4a05-9fad-022b43262a52] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [90f999f0-d2a2-4a05-9fad-022b43262a52] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.005070264s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-823180 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons disable ingress-dns --alsologtostderr -v=1: (8.398361131s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-823180 addons disable ingress --alsologtostderr -v=1: (7.301068186s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.16s)

                                                
                                    
TestJSONOutput/start/Command (44.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-714944 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-714944 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (44.52148693s)
--- PASS: TestJSONOutput/start/Command (44.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-714944 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-714944 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-714944 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-714944 --output=json --user=testUser: (5.97001578s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.43s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-122580 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-122580 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.302834ms)

-- stdout --
	{"specversion":"1.0","id":"73370bbd-06ce-4791-8888-ac8ce59694c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-122580] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4740de94-e071-48bc-86a5-1c278be6969c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"59802cd1-e73b-49c8-928f-3ee31113c5eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"449b0cf4-f990-4a3a-8d6d-56f61505274a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig"}}
	{"specversion":"1.0","id":"19b9d63b-0b24-411e-9316-c8beb68cb4d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube"}}
	{"specversion":"1.0","id":"f0b4c575-e228-40df-89fe-7069db9bd869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6cfaacb0-8d6a-4c01-bfc8-1d78d8792c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1776d0ba-415d-41f6-9481-198c038736ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-122580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-122580
--- PASS: TestErrorJSONOutput (0.43s)
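Each line minikube prints under `--output=json` is a CloudEvents-style JSON object, as the stdout above shows. A minimal sketch of picking the exit code out of the error event — the field set is taken only from what appears in this log:

```python
import json

# One event line, abridged from the log above; only fields visible there.
line = ('{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
        '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}')

event = json.loads(line)
if event["type"] == "io.k8s.sigs.minikube.error":
    # The exit code is encoded as a string inside the event payload.
    print(event["data"]["exitcode"])  # -> 56
```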

TestKicCustomNetwork/create_custom_network (27.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-917382 --network=
E0224 00:52:53.945252   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 00:52:54.121601   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.126877   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.137160   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.157436   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.197816   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.278093   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.438482   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:54.759071   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:55.399944   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:52:56.680442   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-917382 --network=: (25.231531326s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-917382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-917382
E0224 00:52:59.241404   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-917382: (2.634624116s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.93s)

TestKicCustomNetwork/use_default_bridge_network (27.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-642727 --network=bridge
E0224 00:53:04.362310   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:53:14.602775   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 00:53:21.632131   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-642727 --network=bridge: (24.941180271s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-642727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-642727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-642727: (2.42922923s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.44s)

TestKicExistingNetwork (28.74s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-468320 --network=existing-network
E0224 00:53:35.083417   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-468320 --network=existing-network: (25.938407715s)
helpers_test.go:175: Cleaning up "existing-network-468320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-468320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-468320: (2.394701063s)
--- PASS: TestKicExistingNetwork (28.74s)

TestKicCustomSubnet (29.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-958673 --subnet=192.168.60.0/24
E0224 00:54:16.044827   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-958673 --subnet=192.168.60.0/24: (27.068214808s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-958673 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-958673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-958673
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-958673: (2.690128083s)
--- PASS: TestKicCustomSubnet (29.82s)
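TestKicCustomSubnet asserts that the Docker network's `IPAM.Config` subnet matches the requested `--subnet=192.168.60.0/24`. A quick stdlib-only sanity check of CIDR membership, using only addresses that appear in this run:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.60.0/24")

# An address inside the requested subnet vs. the static IP used by the
# later TestKicStaticIP run, which lies outside it.
print(ipaddress.ip_address("192.168.60.2") in subnet)     # -> True
print(ipaddress.ip_address("192.168.200.200") in subnet)  # -> False
```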

TestKicStaticIP (29.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-795187 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-795187 --static-ip=192.168.200.200: (26.249741734s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-795187 ip
helpers_test.go:175: Cleaning up "static-ip-795187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-795187
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-795187: (2.702639162s)
--- PASS: TestKicStaticIP (29.18s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (59.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-329045 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-329045 --driver=docker  --container-runtime=docker: (27.360962605s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-332319 --driver=docker  --container-runtime=docker
E0224 00:55:37.965632   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-332319 --driver=docker  --container-runtime=docker: (25.15940834s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-329045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-332319
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-332319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-332319
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-332319: (2.586572009s)
helpers_test.go:175: Cleaning up "first-329045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-329045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-329045: (2.683764522s)
--- PASS: TestMinikubeProfile (59.45s)
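`profile list -ojson` returns the profiles as JSON, but this log never prints the payload, so the shape below (a "valid" array of objects carrying a "Name" field) is an assumption used only for illustration:

```python
import json

# Hypothetical payload; the "valid"/"Name" shape is assumed, not shown in
# the log. The profile names are the two created by this test run.
payload = json.loads(
    '{"invalid": [], "valid": '
    '[{"Name": "first-329045"}, {"Name": "second-332319"}]}'
)
names = [p["Name"] for p in payload["valid"]]
print(names)  # -> ['first-329045', 'second-332319']
```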

TestMountStart/serial/StartWithMountFirst (7.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-980786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0224 00:55:59.221092   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.226337   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.236587   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.256847   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.297148   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.377505   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.537880   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:55:59.858424   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:56:00.499234   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:56:01.779726   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-980786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.393252696s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.39s)

TestMountStart/serial/VerifyMountFirst (0.45s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-980786 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.45s)

TestMountStart/serial/StartWithMountSecond (7.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-999466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0224 00:56:04.339857   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:56:09.460942   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-999466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.311671935s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

TestMountStart/serial/VerifyMountSecond (0.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-999466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

TestMountStart/serial/DeleteFirst (2.08s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-980786 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-980786 --alsologtostderr -v=5: (2.081900504s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

TestMountStart/serial/VerifyMountPostDelete (0.45s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-999466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.45s)

TestMountStart/serial/Stop (1.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-999466
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-999466: (1.388975767s)
--- PASS: TestMountStart/serial/Stop (1.39s)

TestMountStart/serial/RestartStopped (7.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-999466
E0224 00:56:19.701557   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-999466: (6.967493962s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-999466 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (71.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-461512 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 00:56:40.182273   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 00:57:21.142783   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-461512 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m10.703197249s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.53s)

TestMultiNode/serial/AddNode (17.28s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-461512 -v 3 --alsologtostderr
E0224 00:57:53.945560   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 00:57:54.121505   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-461512 -v 3 --alsologtostderr: (16.13651253s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr: (1.138585801s)
--- PASS: TestMultiNode/serial/AddNode (17.28s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (16.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 status --output json --alsologtostderr: (1.055130519s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp testdata/cp-test.txt multinode-461512:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3930681034/001/cp-test_multinode-461512.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512:/home/docker/cp-test.txt multinode-461512-m02:/home/docker/cp-test_multinode-461512_multinode-461512-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test_multinode-461512_multinode-461512-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512:/home/docker/cp-test.txt multinode-461512-m03:/home/docker/cp-test_multinode-461512_multinode-461512-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test_multinode-461512_multinode-461512-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp testdata/cp-test.txt multinode-461512-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3930681034/001/cp-test_multinode-461512-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m02:/home/docker/cp-test.txt multinode-461512:/home/docker/cp-test_multinode-461512-m02_multinode-461512.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test_multinode-461512-m02_multinode-461512.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m02:/home/docker/cp-test.txt multinode-461512-m03:/home/docker/cp-test_multinode-461512-m02_multinode-461512-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test_multinode-461512-m02_multinode-461512-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp testdata/cp-test.txt multinode-461512-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3930681034/001/cp-test_multinode-461512-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m03:/home/docker/cp-test.txt multinode-461512:/home/docker/cp-test_multinode-461512-m03_multinode-461512.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512 "sudo cat /home/docker/cp-test_multinode-461512-m03_multinode-461512.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 cp multinode-461512-m03:/home/docker/cp-test.txt multinode-461512-m02:/home/docker/cp-test_multinode-461512-m03_multinode-461512-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 ssh -n multinode-461512-m02 "sudo cat /home/docker/cp-test_multinode-461512-m03_multinode-461512-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.05s)

TestMultiNode/serial/StopNode (3.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 node stop m03
E0224 00:58:21.806323   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 node stop m03: (1.38027367s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-461512 status: exit status 7 (833.639752ms)

-- stdout --
	multinode-461512
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-461512-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-461512-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
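The non-zero exit (status 7) above corresponds to at least one node being down after `node stop m03`. A sketch of counting stopped hosts by reparsing exactly the status text shown in that stdout:

```python
# Status text copied verbatim from the stdout block above.
status = """\
multinode-461512
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-461512-m02
type: Worker
host: Running
kubelet: Running

multinode-461512-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

# Each blank-line-separated block describes one node; its first line is
# the node name. Collect the names of nodes whose host is stopped.
stopped = [blk.splitlines()[0] for blk in status.split("\n\n")
           if "host: Stopped" in blk]
print(stopped)  # -> ['multinode-461512-m03']
```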
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr: exit status 7 (829.723516ms)

-- stdout --
	multinode-461512
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-461512-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-461512-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 00:58:23.177639  181891 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:58:23.178106  181891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:58:23.178123  181891 out.go:309] Setting ErrFile to fd 2...
	I0224 00:58:23.178132  181891 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:58:23.178430  181891 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 00:58:23.178782  181891 out.go:303] Setting JSON to false
	I0224 00:58:23.178823  181891 mustload.go:65] Loading cluster: multinode-461512
	I0224 00:58:23.179111  181891 notify.go:220] Checking for updates...
	I0224 00:58:23.179587  181891 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:58:23.179604  181891 status.go:255] checking status of multinode-461512 ...
	I0224 00:58:23.179962  181891 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 00:58:23.244935  181891 status.go:330] multinode-461512 host status = "Running" (err=<nil>)
	I0224 00:58:23.244958  181891 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:58:23.245186  181891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512
	I0224 00:58:23.306665  181891 host.go:66] Checking if "multinode-461512" exists ...
	I0224 00:58:23.306951  181891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:58:23.306999  181891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512
	I0224 00:58:23.369451  181891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512/id_rsa Username:docker}
	I0224 00:58:23.458328  181891 ssh_runner.go:195] Run: systemctl --version
	I0224 00:58:23.461696  181891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:58:23.470174  181891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 00:58:23.585544  181891 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:42 SystemTime:2023-02-24 00:58:23.577175523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 00:58:23.586126  181891 kubeconfig.go:92] found "multinode-461512" server: "https://192.168.58.2:8443"
	I0224 00:58:23.586153  181891 api_server.go:165] Checking apiserver status ...
	I0224 00:58:23.586196  181891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 00:58:23.595111  181891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2043/cgroup
	I0224 00:58:23.601822  181891 api_server.go:181] apiserver freezer: "10:freezer:/docker/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/kubepods/burstable/pod4c6cb11c2c301f276f12bb7545f0af61/7bfce1d4138f908286fbd80c375c76a29806bfa50783304b1667b657dbcd4fcb"
	I0224 00:58:23.601873  181891 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8075ab3952c8c07e2d002c8a5458b9bc0c59ce90bc9690656e8d98b634ec87cd/kubepods/burstable/pod4c6cb11c2c301f276f12bb7545f0af61/7bfce1d4138f908286fbd80c375c76a29806bfa50783304b1667b657dbcd4fcb/freezer.state
	I0224 00:58:23.607758  181891 api_server.go:203] freezer state: "THAWED"
	I0224 00:58:23.607775  181891 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0224 00:58:23.612541  181891 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0224 00:58:23.612570  181891 status.go:421] multinode-461512 apiserver status = Running (err=<nil>)
	I0224 00:58:23.612582  181891 status.go:257] multinode-461512 status: &{Name:multinode-461512 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 00:58:23.612605  181891 status.go:255] checking status of multinode-461512-m02 ...
	I0224 00:58:23.612835  181891 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 00:58:23.674359  181891 status.go:330] multinode-461512-m02 host status = "Running" (err=<nil>)
	I0224 00:58:23.674383  181891 host.go:66] Checking if "multinode-461512-m02" exists ...
	I0224 00:58:23.674645  181891 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-461512-m02
	I0224 00:58:23.738145  181891 host.go:66] Checking if "multinode-461512-m02" exists ...
	I0224 00:58:23.738421  181891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 00:58:23.738464  181891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-461512-m02
	I0224 00:58:23.801924  181891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3785/.minikube/machines/multinode-461512-m02/id_rsa Username:docker}
	I0224 00:58:23.890139  181891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 00:58:23.898476  181891 status.go:257] multinode-461512-m02 status: &{Name:multinode-461512-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 00:58:23.898503  181891 status.go:255] checking status of multinode-461512-m03 ...
	I0224 00:58:23.898777  181891 cli_runner.go:164] Run: docker container inspect multinode-461512-m03 --format={{.State.Status}}
	I0224 00:58:23.963483  181891 status.go:330] multinode-461512-m03 host status = "Stopped" (err=<nil>)
	I0224 00:58:23.963508  181891 status.go:343] host is not running, skipping remaining checks
	I0224 00:58:23.963517  181891 status.go:257] multinode-461512-m03 status: &{Name:multinode-461512-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.04s)
TestMultiNode/serial/StartAfterStop (12.57s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 node start m03 --alsologtostderr: (11.375551416s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 status: (1.072497525s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.57s)
TestMultiNode/serial/RestartKeepsNodes (96.88s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-461512
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-461512
E0224 00:58:43.064337   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-461512: (22.909840895s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-461512 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-461512 --wait=true -v=8 --alsologtostderr: (1m13.880203999s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-461512
--- PASS: TestMultiNode/serial/RestartKeepsNodes (96.88s)
TestMultiNode/serial/DeleteNode (6.11s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 node delete m03: (5.134761501s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.11s)
TestMultiNode/serial/StopMultiNode (22.05s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-461512 stop: (21.707140969s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-461512 status: exit status 7 (175.323883ms)
-- stdout --
	multinode-461512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-461512-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr: exit status 7 (170.927468ms)
-- stdout --
	multinode-461512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-461512-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0224 01:00:41.452189  203842 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:00:41.452359  203842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:00:41.452368  203842 out.go:309] Setting ErrFile to fd 2...
	I0224 01:00:41.452375  203842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:00:41.452486  203842 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3785/.minikube/bin
	I0224 01:00:41.452685  203842 out.go:303] Setting JSON to false
	I0224 01:00:41.452728  203842 mustload.go:65] Loading cluster: multinode-461512
	I0224 01:00:41.452816  203842 notify.go:220] Checking for updates...
	I0224 01:00:41.453104  203842 config.go:182] Loaded profile config "multinode-461512": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:00:41.453119  203842 status.go:255] checking status of multinode-461512 ...
	I0224 01:00:41.453528  203842 cli_runner.go:164] Run: docker container inspect multinode-461512 --format={{.State.Status}}
	I0224 01:00:41.514966  203842 status.go:330] multinode-461512 host status = "Stopped" (err=<nil>)
	I0224 01:00:41.515008  203842 status.go:343] host is not running, skipping remaining checks
	I0224 01:00:41.515018  203842 status.go:257] multinode-461512 status: &{Name:multinode-461512 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 01:00:41.515059  203842 status.go:255] checking status of multinode-461512-m02 ...
	I0224 01:00:41.515290  203842 cli_runner.go:164] Run: docker container inspect multinode-461512-m02 --format={{.State.Status}}
	I0224 01:00:41.578838  203842 status.go:330] multinode-461512-m02 host status = "Stopped" (err=<nil>)
	I0224 01:00:41.578890  203842 status.go:343] host is not running, skipping remaining checks
	I0224 01:00:41.578898  203842 status.go:257] multinode-461512-m02 status: &{Name:multinode-461512-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.05s)
TestMultiNode/serial/RestartMultiNode (54.12s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-461512 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 01:00:59.220139   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 01:01:26.905273   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-461512 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.13429892s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-461512 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.12s)
TestMultiNode/serial/ValidateNameConflict (30.02s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-461512
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-461512-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-461512-m02 --driver=docker  --container-runtime=docker: exit status 14 (68.314584ms)
-- stdout --
	* [multinode-461512-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-461512-m02' is duplicated with machine name 'multinode-461512-m02' in profile 'multinode-461512'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-461512-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-461512-m03 --driver=docker  --container-runtime=docker: (26.821625532s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-461512
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-461512: exit status 80 (420.651355ms)
-- stdout --
	* Adding node m03 to cluster multinode-461512
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-461512-m03 already exists in multinode-461512-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-461512-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-461512-m03: (2.665277381s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.02s)
TestPreload (121.81s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-558331 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0224 01:02:53.945482   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:02:54.121763   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-558331 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m1.938542066s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-558331 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-558331
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-558331: (10.895910669s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-558331 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-558331 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (44.788251344s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-558331 -- docker images
helpers_test.go:175: Cleaning up "test-preload-558331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-558331
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-558331: (2.762666808s)
--- PASS: TestPreload (121.81s)
TestScheduledStopUnix (102.57s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-961952 --memory=2048 --driver=docker  --container-runtime=docker
E0224 01:04:16.992457   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-961952 --memory=2048 --driver=docker  --container-runtime=docker: (28.380061083s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961952 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-961952 -n scheduled-stop-961952
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961952 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961952 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961952 -n scheduled-stop-961952
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-961952
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-961952 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-961952
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-961952: exit status 7 (115.008538ms)
-- stdout --
	scheduled-stop-961952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961952 -n scheduled-stop-961952
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-961952 -n scheduled-stop-961952: exit status 7 (113.925707ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-961952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-961952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-961952: (2.220697222s)
--- PASS: TestScheduledStopUnix (102.57s)
TestSkaffold (59.86s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1026017350 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-978232 --memory=2600 --driver=docker  --container-runtime=docker
E0224 01:05:59.219850   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-978232 --memory=2600 --driver=docker  --container-runtime=docker: (25.978186766s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1026017350 run --minikube-profile skaffold-978232 --kube-context skaffold-978232 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1026017350 run --minikube-profile skaffold-978232 --kube-context skaffold-978232 --status-check=true --port-forward=false --interactive=false: (20.387373005s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6f9d994c66-xmqzm" [7a83a634-7d21-4a47-bae3-36da850e6372] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.01101391s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-69654455c4-s9nv5" [fef7af2e-2ff6-450d-ad0c-4616c733b395] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005819473s
helpers_test.go:175: Cleaning up "skaffold-978232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-978232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-978232: (2.8536825s)
--- PASS: TestSkaffold (59.86s)
TestInsufficientStorage (12.92s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-441232 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-441232 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.756165364s)
-- stdout --
	{"specversion":"1.0","id":"4941952f-9f3c-4c95-aa06-23dce211449d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-441232] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"57458280-d873-4bfb-8a98-3ec7226ab8f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"61493923-3ef8-43ce-aa42-a2cc987dc98b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0ac22e1d-b013-4ede-8a3f-a12022adca2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig"}}
	{"specversion":"1.0","id":"c8635d68-dab7-42ce-9fd6-8ed267a7c3b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube"}}
	{"specversion":"1.0","id":"739c6748-956f-4ef4-99c1-007c302e52e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"94df06d6-ecf1-471d-89ae-aab6f4296523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"55437a33-bdf2-40e1-8748-8a65333992c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"72253192-947c-4177-8f81-b19a8fb82276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"255952db-0a27-43d0-bc27-6820378b15b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbd7638d-032d-48dd-84e2-53610e2fbe78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"43d06e99-c3e7-48f1-9806-6a507d6d8591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-441232 in cluster insufficient-storage-441232","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"851135a0-adb9-4e3b-8981-9e90636faa87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb59eb33-8f35-4188-849f-d56403b53f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"697eb0ca-cf94-4485-9270-8046ef8a62b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-441232 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-441232 --output=json --layout=cluster: exit status 7 (455.271294ms)

-- stdout --
	{"Name":"insufficient-storage-441232","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-441232","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 01:07:07.426384  252170 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-441232" does not appear in /home/jenkins/minikube-integration/15909-3785/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-441232 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-441232 --output=json --layout=cluster: exit status 7 (453.618466ms)

-- stdout --
	{"Name":"insufficient-storage-441232","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-441232","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 01:07:07.880769  252369 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-441232" does not appear in /home/jenkins/minikube-integration/15909-3785/kubeconfig
	E0224 01:07:07.888412  252369 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/insufficient-storage-441232/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-441232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-441232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-441232: (2.253235464s)
--- PASS: TestInsufficientStorage (12.92s)

TestRunningBinaryUpgrade (75.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.2036740891.exe start -p running-upgrade-674284 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.2036740891.exe start -p running-upgrade-674284 --memory=2200 --vm-driver=docker  --container-runtime=docker: (53.806943269s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-674284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-674284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.498461341s)
helpers_test.go:175: Cleaning up "running-upgrade-674284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-674284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-674284: (2.738682613s)
--- PASS: TestRunningBinaryUpgrade (75.61s)

TestKubernetesUpgrade (368.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0224 01:07:53.945180   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:07:54.121865   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.886695621s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-673826
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-673826: (12.771060204s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-673826 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-673826 status --format={{.Host}}: exit status 7 (234.516171ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.957602135s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-673826 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (71.210123ms)

-- stdout --
	* [kubernetes-upgrade-673826] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-673826
	    minikube start -p kubernetes-upgrade-673826 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6738262 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-673826 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-673826 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.112984157s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-673826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-673826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-673826: (2.971381687s)
--- PASS: TestKubernetesUpgrade (368.06s)

TestMissingContainerUpgrade (139.33s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.726157253.exe start -p missing-upgrade-913841 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.726157253.exe start -p missing-upgrade-913841 --memory=2200 --driver=docker  --container-runtime=docker: (1m21.877678173s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-913841
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-913841: (10.440813067s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-913841
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-913841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-913841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.001868684s)
helpers_test.go:175: Cleaning up "missing-upgrade-913841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-913841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-913841: (4.489569749s)
--- PASS: TestMissingContainerUpgrade (139.33s)

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestStoppedBinaryUpgrade/Upgrade (99.85s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.1596659089.exe start -p stopped-upgrade-885619 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.1596659089.exe start -p stopped-upgrade-885619 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m13.843452756s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.1596659089.exe -p stopped-upgrade-885619 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.1596659089.exe -p stopped-upgrade-885619 stop: (2.612493421s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-885619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-885619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.397079581s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.85s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.52s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-885619
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-885619: (1.522104575s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.52s)

TestPause/serial/Start (58.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-139964 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-139964 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (58.100696372s)
--- PASS: TestPause/serial/Start (58.10s)

TestPause/serial/SecondStartNoReconfiguration (39.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-139964 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-139964 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.059124186s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (73.957995ms)

-- stdout --
	* [NoKubernetes-268472] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3785/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3785/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (25.97s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268472 --driver=docker  --container-runtime=docker
E0224 01:10:59.220081   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268472 --driver=docker  --container-runtime=docker: (25.430809079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268472 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.97s)

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-139964 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.58s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-139964 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-139964 --output=json --layout=cluster: exit status 2 (584.57204ms)

-- stdout --
	{"Name":"pause-139964","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-139964","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.58s)

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-139964 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-139964 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (2.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-139964 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-139964 --alsologtostderr -v=5: (2.868214831s)
--- PASS: TestPause/serial/DeletePaused (2.87s)

TestPause/serial/VerifyDeletedResources (1.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-139964
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-139964: exit status 1 (62.235811ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-139964: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.17s)

TestNoKubernetes/serial/StartWithStopK8s (16.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --driver=docker  --container-runtime=docker: (14.055420084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-268472 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-268472 status -o json: exit status 2 (482.527111ms)

-- stdout --
	{"Name":"NoKubernetes-268472","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-268472
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-268472: (2.375898078s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.91s)

TestNoKubernetes/serial/Start (6.82s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268472 --no-kubernetes --driver=docker  --container-runtime=docker: (6.822327054s)
--- PASS: TestNoKubernetes/serial/Start (6.82s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268472 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268472 "sudo systemctl is-active --quiet service kubelet": exit status 1 (508.690866ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

TestNoKubernetes/serial/ProfileList (19.51s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.242962314s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0224 01:11:54.585696   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.271120349s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.51s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-181674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0224 01:11:44.420546   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.501441   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.661931   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:44.982805   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:45.623231   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:46.904118   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:11:49.464693   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-181674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m3.71851257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.72s)

TestNoKubernetes/serial/Stop (1.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-268472
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-268472: (1.427863776s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

TestNoKubernetes/serial/StartNoArgs (7.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-268472 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-268472 --driver=docker  --container-runtime=docker: (7.539147129s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-268472 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-268472 "sudo systemctl is-active --quiet service kubelet": exit status 1 (517.705291ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-103343 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0224 01:12:22.266044   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 01:12:25.307606   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:12:53.945250   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:12:54.121758   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-103343 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (51.173668651s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-103343 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96cc837d-8323-48a1-91e7-84af0e4a1dcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96cc837d-8323-48a1-91e7-84af0e4a1dcb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.013230334s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-103343 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-103343 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-103343 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-103343 --alsologtostderr -v=3
E0224 01:13:06.267763   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-103343 --alsologtostderr -v=3: (10.943060128s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-103343 -n embed-certs-103343
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-103343 -n embed-certs-103343: exit status 7 (131.056675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-103343 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (563.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-103343 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-103343 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m23.277021884s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-103343 -n embed-certs-103343
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (563.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (1m3.359163742s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-335632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-335632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (47.621959292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-181674 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c09a6b1-c36d-452b-aebd-1700004198f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c09a6b1-c36d-452b-aebd-1700004198f5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.012256694s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-181674 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-181674 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-181674 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-181674 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-181674 --alsologtostderr -v=3: (11.087101577s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181674 -n old-k8s-version-181674
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181674 -n old-k8s-version-181674: exit status 7 (120.920989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-181674 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (337.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-181674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-181674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m36.819641268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-181674 -n old-k8s-version-181674
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (337.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-335632 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32e63196-2369-475c-ba18-daf66bca70b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [32e63196-2369-475c-ba18-daf66bca70b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014566016s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-335632 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-496607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7edbe99-71bb-4f78-beb6-193e3e2202b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0224 01:14:28.188159   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c7edbe99-71bb-4f78-beb6-193e3e2202b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012420224s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-496607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-335632 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-335632 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-335632 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-335632 --alsologtostderr -v=3: (11.063965532s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-496607 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-496607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-496607 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-496607 --alsologtostderr -v=3: (10.821076851s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632: exit status 7 (124.356603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-335632 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-335632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-335632 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m16.949570881s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496607 -n no-preload-496607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496607 -n no-preload-496607: exit status 7 (119.204171ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-496607 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (561.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0224 01:15:59.219430   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
E0224 01:16:44.344717   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:17:12.028534   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
E0224 01:17:53.945220   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:17:54.121710   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m21.200187763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496607 -n no-preload-496607
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (561.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9hvbr" [843d5194-005c-4b2b-a840-6a5df8f70b0c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011284398s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9hvbr" [843d5194-005c-4b2b-a840-6a5df8f70b0c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00625814s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-181674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-181674 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-181674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181674 -n old-k8s-version-181674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181674 -n old-k8s-version-181674: exit status 2 (582.878764ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-181674 -n old-k8s-version-181674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-181674 -n old-k8s-version-181674: exit status 2 (577.759003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-181674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-181674 -n old-k8s-version-181674
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-181674 -n old-k8s-version-181674
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-122549 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-122549 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (42.300187221s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-122549 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-122549 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-122549 --alsologtostderr -v=3: (5.832587695s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-122549 -n newest-cni-122549
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-122549 -n newest-cni-122549: exit status 7 (115.87127ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-122549 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-122549 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0224 01:20:56.993449   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:20:59.219717   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-122549 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (27.417175344s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-122549 -n newest-cni-122549
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-122549 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-122549 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-122549 -n newest-cni-122549
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-122549 -n newest-cni-122549: exit status 2 (501.928432ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-122549 -n newest-cni-122549
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-122549 -n newest-cni-122549: exit status 2 (495.125107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-122549 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-122549 -n newest-cni-122549
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-122549 -n newest-cni-122549
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.62s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (49.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0224 01:21:44.344488   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/skaffold-978232/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (49.419254441s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.42s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lthdr" [19efe522-250d-4f1f-9e18-064ea9531057] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-lthdr" [19efe522-250d-4f1f-9e18-064ea9531057] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005558138s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k8ffc" [64a4174a-2f73-42ce-8e3d-5dd4946b2e34] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012493453s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k8ffc" [64a4174a-2f73-42ce-8e3d-5dd4946b2e34] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006418167s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-103343 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-103343 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-103343 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-103343 -n embed-certs-103343
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-103343 -n embed-certs-103343: exit status 2 (556.50861ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-103343 -n embed-certs-103343
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-103343 -n embed-certs-103343: exit status 2 (558.603659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-103343 --alsologtostderr -v=1
E0224 01:22:53.945447   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/addons-905638/client.crt: no such file or directory
E0224 01:22:54.121568   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-103343 -n embed-certs-103343
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-103343 -n embed-certs-103343
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (57.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.367820991s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.37s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0224 01:23:48.358806   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.364090   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.374354   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.394617   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.434854   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.515188   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.675857   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:48.996466   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:49.636906   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:50.917708   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:23:53.478756   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m16.111911368s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pdjr8" [42717e95-6a19-4f1a-8f62-426d728f693a] Running
E0224 01:23:58.599167   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017363993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-r4p6c" [e28106cf-cfc2-4538-a420-f1b20b99f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-r4p6c" [e28106cf-cfc2-4538-a420-f1b20b99f0dd] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005458741s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9h2hc" [99980648-00e4-4a7b-91b2-3bc21d2d01a0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012309336s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9h2hc" [99980648-00e4-4a7b-91b2-3bc21d2d01a0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006467433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-335632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0224 01:24:08.839505   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m8rjv" [d49d695e-d7f6-492e-ab2b-82719a2bcb16] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013114487s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-335632 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-335632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632: exit status 2 (489.700952ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632: exit status 2 (508.582875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-335632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-335632 -n default-k8s-diff-port-335632
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4jccl" [e2fd2f64-a4e1-4481-bfe8-9968dda79229] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018507395s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m8rjv" [d49d695e-d7f6-492e-ab2b-82719a2bcb16] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007851224s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-496607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.245443171s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.25s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-496607 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.75s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dfc7s" [dc947af8-7394-4bad-8180-1625a59ec2ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dfc7s" [dc947af8-7394-4bad-8180-1625a59ec2ce] Running
E0224 01:24:29.319903   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
E0224 01:24:29.345216   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006027303s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestStartStop/group/no-preload/serial/Pause (4.69s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-496607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496607 -n no-preload-496607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496607 -n no-preload-496607: exit status 2 (677.158551ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496607 -n no-preload-496607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496607 -n no-preload-496607: exit status 2 (680.929208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-496607 --alsologtostderr -v=1
E0224 01:24:24.044619   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.050132   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.060390   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.080661   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.121059   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.201390   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.361627   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:24:24.683226   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-496607 --alsologtostderr -v=1: (1.115837399s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496607 -n no-preload-496607
E0224 01:24:25.323382   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496607 -n no-preload-496607
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.69s)

TestNetworkPlugins/group/false/Start (46.97s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (46.967111545s)
--- PASS: TestNetworkPlugins/group/false/Start (46.97s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0224 01:24:34.465988   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (85.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0224 01:24:44.706145   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m25.850985882s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.85s)

TestNetworkPlugins/group/flannel/Start (60.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0224 01:25:05.187037   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/default-k8s-diff-port-335632/client.crt: no such file or directory
E0224 01:25:10.280617   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/old-k8s-version-181674/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m0.809107797s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.81s)

TestNetworkPlugins/group/false/KubeletFlags (0.68s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.68s)

TestNetworkPlugins/group/false/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-ts5xn" [2b930458-b508-41dc-9c45-5bb4ba635f96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-ts5xn" [2b930458-b508-41dc-9c45-5bb4ba635f96] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.008452798s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.64s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-d525c" [b149eef0-de2d-471a-8635-005c73f60cf1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-d525c" [b149eef0-de2d-471a-8635-005c73f60cf1] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005148311s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (59.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0224 01:25:57.167563   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/functional-304785/client.crt: no such file or directory
E0224 01:25:59.219915   10470 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/ingress-addon-legacy-823180/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (59.930825951s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.93s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

TestNetworkPlugins/group/kubenet/Start (82.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-411006 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m22.175227258s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (82.18s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9wxvv" [7d94da4c-0361-4574-88ce-67d4c247ed48] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016199769s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-kx5pc" [d670fbc4-5380-469e-aa21-738574f3fa4c] Pending
helpers_test.go:344: "netcat-694fc96674-kx5pc" [d670fbc4-5380-469e-aa21-738574f3fa4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-kx5pc" [d670fbc4-5380-469e-aa21-738574f3fa4c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00606586s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.58s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-zswhx" [ee245eae-5d8a-4ae6-9fbf-b8dfa9ee07f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-zswhx" [ee245eae-5d8a-4ae6-9fbf-b8dfa9ee07f3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006711638s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4tx67" [9162498c-4647-4e40-b7d4-5b7e699962e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4tx67" [9162498c-4647-4e40-b7d4-5b7e699962e0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006159884s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-411006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.51s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-411006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-sx79d" [4a8cde40-560d-4b87-82c2-5d569476994d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-sx79d" [4a8cde40-560d-4b87-82c2-5d569476994d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.005413055s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.21s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-411006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-411006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

Test skip (19/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-923378
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)

TestNetworkPlugins/group/cilium (4.14s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-411006 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-411006
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-411006
>>> host: /etc/nsswitch.conf:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/hosts:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/resolv.conf:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-411006
>>> host: crictl pods:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: crictl containers:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> k8s: describe netcat deployment:
error: context "cilium-411006" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-411006" does not exist
>>> k8s: netcat logs:
error: context "cilium-411006" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-411006" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-411006" does not exist
>>> k8s: coredns logs:
error: context "cilium-411006" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-411006" does not exist
>>> k8s: api server logs:
error: context "cilium-411006" does not exist
>>> host: /etc/cni:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: ip a s:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: ip r s:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: iptables-save:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: iptables table nat:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-411006
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-411006
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-411006" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-411006" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-411006
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-411006
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-411006" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-411006" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-411006" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-411006" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-411006" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: kubelet daemon config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> k8s: kubelet logs:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 24 Feb 2023 01:11:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-268472
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 24 Feb 2023 01:09:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-054120
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3785/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 24 Feb 2023 01:08:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-673826
contexts:
- context:
    cluster: NoKubernetes-268472
    extensions:
    - extension:
        last-update: Fri, 24 Feb 2023 01:11:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: NoKubernetes-268472
  name: NoKubernetes-268472
- context:
    cluster: cert-expiration-054120
    extensions:
    - extension:
        last-update: Fri, 24 Feb 2023 01:09:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-054120
  name: cert-expiration-054120
- context:
    cluster: kubernetes-upgrade-673826
    user: kubernetes-upgrade-673826
  name: kubernetes-upgrade-673826
current-context: NoKubernetes-268472
kind: Config
preferences: {}
users:
- name: NoKubernetes-268472
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/NoKubernetes-268472/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/NoKubernetes-268472/client.key
- name: cert-expiration-054120
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/cert-expiration-054120/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/cert-expiration-054120/client.key
- name: kubernetes-upgrade-673826
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/kubernetes-upgrade-673826/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3785/.minikube/profiles/kubernetes-upgrade-673826/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-411006
>>> host: docker daemon status:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: docker daemon config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: docker system info:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: cri-docker daemon status:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: cri-docker daemon config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: cri-dockerd version:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: containerd daemon status:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: containerd daemon config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: containerd config dump:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: crio daemon status:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: crio daemon config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: /etc/crio:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
>>> host: crio config:
* Profile "cilium-411006" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411006"
----------------------- debugLogs end: cilium-411006 [took: 3.671189825s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-411006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-411006
--- SKIP: TestNetworkPlugins/group/cilium (4.14s)